Tab Completion

I'm Tab Atkins Jr, and I wear many hats. I work for Google on the Chrome browser as a Web Standards Hacker. I'm also a member of the CSS Working Group, and am a member of or contributor to several other working groups in the W3C.

Newbie's Guide to Python Module Management


I've always been confused by Python modules and how they work, but was able to kinda muddle thru on my own, as I expect most of us do. I recently sat down and actually taught myself how they work, and I think I have a handle on them now. To test it, I refactored a chunk of code in one of my projects that was bothering me, and as far as I can tell, I did so successfully!

This post documents that process, to hopefully help others in figuring stuff out.

The Problem

My main open-source project, Bikeshed, has to maintain a set of data files. These get updated frequently, so users can call bikeshed update to get new data for them, straight from the data sources. Each data source gets its own independent processing; there's not really any shared code between the different data files.

Originally there were only two types of data, and I wasn't doing anything too complicated with either of them, so I just went ahead and crammed both of the update functions into the same file, update.py. Fast-forward two years, and I've now got six independent update functions in this file, and several of them have gotten substantially more complicated. Refactoring code into helper functions is becoming hard, because it makes it more difficult to find the "main" functions buried in the sea of code.

What I'd really like is for each independent updater function, and its associated suite of helper functions, to live in a separate file. But I've already got a lot of files in my project - it would be great to have them all grouped into a subfolder.

Intro to Python Packages/Modules

So each foo.py file in your project automatically defines a module, named foo. You can import these files and get access to their variables with from . import foo, or from .foo import someVariable. (This is using absolute package-relative imports, which you should be using, not the "implicit relative imports" that Python2 originally shipped with; the . indicates "look in this module's parent".)

Each foo folder in your project defines a package named foo, if the folder has an __init__.py file in it. Packages are imported exactly like modules, with from . import foo/etc; the only difference is that packages can contain submodules (and subpackages) in addition to variables. This is how you get imports like import foo.bar.baz - foo and bar are packages (with bar a subpackage of foo), baz is either a package or a module.

Whenever you import a package, Python will run the __init__.py file and expose its variables for importing. (That's all the global variable names the code in the module can see, including any modules that the code itself imports!) It also automatically exposes any submodules in the package, regardless of whether __init__.py imports them or not: you can write import foo.bar if the foo/ folder contains a bar.py file, without foo/__init__.py having to do anything special. (Same for nested packages.)
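You can check this behavior with a throwaway package built in a temp directory (the names demopkg and sub are invented for this demo):

```python
import os
import sys
import tempfile

# Build a tiny package on disk: demopkg/__init__.py and demopkg/sub.py.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "demopkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("greeting = 'hi'\n")  # runs when demopkg is imported
with open(os.path.join(pkg, "sub.py"), "w") as f:
    f.write("value = 42\n")

sys.path.insert(0, root)  # make the temp dir importable

import demopkg            # runs demopkg/__init__.py
print(demopkg.greeting)   # 'hi'

import demopkg.sub        # works even tho __init__.py never mentions sub
print(demopkg.sub.value)  # 42
```

The submodule import succeeds purely because sub.py exists in the folder; __init__.py didn't have to lift a finger.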

Finally, whenever you do a * import (like from foo import *), Python will go ahead and pull in all the variables that foo/__init__.py defines and dump them into your namespace, but it does not dump submodules in unless __init__.py explicitly imported them already. (This is because the submodules might not be meant to be part of the public API, and importing has side-effects - it runs the module's code - which you might not want to happen automatically.) Instead, it looks to see if __init__.py defined a magical __all__ variable; if it did, it assumes it's a list of strings naming all the submodules that should be imported by a * import, and does so.

(AKA, if your __init__.py already imports all the submodules you use or intend to expose, you're fine. If there are more that __init__.py doesn't use, but you want to expose to *-imports, set __all__ = ["sub1", "sub2"] in __init__.py.)
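Same throwaway-package trick, this time showing __all__ steering a *-import (starpkg, sub1, and sub2 are invented names):

```python
import os
import sys
import tempfile

# starpkg has two submodules, but its __all__ only lists one of them.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "starpkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("__all__ = ['sub1']\n")
for name in ("sub1", "sub2"):
    with open(os.path.join(pkg, name + ".py"), "w") as f:
        f.write("value = %r\n" % name)

sys.path.insert(0, root)

ns = {}
exec("from starpkg import *", ns)  # simulate a star-import into a fresh namespace
print("sub1" in ns)  # True: listed in __all__, so it gets pulled in
print("sub2" in ns)  # False: not listed, so it's left alone
```

Only what __all__ names shows up; sub2 stays safely un-imported.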

The Solution

So now we have all the information we need.

Step 1 is creating an update/ folder, and adding a blank __init__.py file. We now have an update package ready to import, even tho it's empty right now.

Step 2 is copying out all the code into submodules; I created an update/updateCrossRefs.py file and copied the cross-ref updater code into it, and so on. Now that the code is in separate files, I can rename the updater functions to all be just def update() for simplicity; no need to mention what they're updating when that's already in the module name.

Now that the code has moved from a top-level module in my project to submodules, its import statements are wrong - anything that says from . import foo will look in the update package, not the overall project. Easy to fix: I just change these to from .. import foo; you can add as many dots as you need to move further up the package tree.
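Here's that ".." fix in miniature - a submodule reaching up past its own package into the grandparent (proj, util, update, and sub are all invented names for this demo):

```python
import os
import sys
import tempfile

# Layout: proj/__init__.py, proj/util.py, proj/update/__init__.py,
# proj/update/sub.py - mirroring a top-level helper and a nested submodule.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "proj", "update"))
open(os.path.join(root, "proj", "__init__.py"), "w").close()
with open(os.path.join(root, "proj", "util.py"), "w") as f:
    f.write("name = 'util'\n")
open(os.path.join(root, "proj", "update", "__init__.py"), "w").close()
with open(os.path.join(root, "proj", "update", "sub.py"), "w") as f:
    # "from . import util" would fail here - "." is the update package.
    # ".." climbs one level up, to proj/, where util.py actually lives.
    f.write("from .. import util\n")

sys.path.insert(0, root)

from proj.update import sub
print(sub.util.name)  # 'util'
```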

At this point I'm already mostly done; I can run import update, then later call update.updateCrossRefs.update(), and it magically works! The last step is handling "global" code, and putting together a good __init__.py.

For Step 3, I have one leftover piece of code, the general update() function that updates everything (or whatever subset of stuff I want). This is the only function the outside world ever actually calls; it's the only thing that calls the more specific updaters.

There are a few ways to do this - you could just put it directly in __init__.py and call it a day. But that exposes the imports it uses, and I want to keep the update module's API surface nice and clean. Instead, I create another submodule, main.py, and put the function over there. Then, in __init__.py, I just write from .main import update. Now the outside world can say from . import update, and then just call update.update(), without having to know that the function is actually defined in a submodule.

Now that this is all done, I can finally delete the original update.py file in my main project directory. It's empty at this point, after all. ^_^

The End Result

I end up with the following directory structure:

bikeshed/
  ...other stuff...
  update/
    __init__.py
    main.py
    updateCrossRefs.py
    updateBiblio.py
    ...

And my __init__.py just says:

from .main import update, fixupDataFiles

__all__ = ["updateCrossRefs", "updateBiblio", 
           "updateCanIUse", "updateLinkDefaults", 
           "updateTestSuites", "updateLanguages"]

Then my project code, which was already doing from . import update, and calling update.update() (or update.fixupDataFiles()), continues to work and never realizes anything has changed at all!

How To: Clone your dang website you manage via git


This is a reminder post for myself, because I just had to re-clone my website on my local machine and futzed about for half an hour before figuring it out.

So, you've already set up your website as a bare git repository. Good for you! You've also got ssh working, and remember your ssh login details: the @ address and the password. The last thing you need is the location, on the remote server, of the actual git repository. (In my case, /home/public, the directory structure nearlyfreespeech.net uses by default.)

Now just:

git clone <ssh-address>:<remote-location> website

It'll ask for a password, then ta-da!

Later, if I remember to, I'll update this with notes on getting an ssh key set up correctly, so I don't have to type a password on every push.
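In the meantime, here's a likely sketch, assuming a typical OpenSSH setup (the <ssh-address> placeholder is the same user@host used for the clone above; the demo_key filename is just for illustration):

```shell
# Generate a passphrase-less keypair (ed25519 is the modern default choice):
ssh-keygen -t ed25519 -N "" -f ./demo_key

# Then push the public half to the server - this asks for the password
# one last time, and appends the key to the server's authorized_keys:
#   ssh-copy-id -i ./demo_key.pub <ssh-address>

ls demo_key demo_key.pub
```

After that, pushes authenticate with the key instead of prompting.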

Why I Abandoned @apply


Some time ago, I created the modern spec for CSS Variables, which lets you store uninterpreted values into "custom properties", then substitute those values into real properties later on.

This worked great, and was particularly convenient for Shadow DOM, which wanted a simple, controllable way to let the outside page twiddle the styling of a component, but without giving them full access to the internals of the component. The component author could just use a couple of var() functions, with default values so that it worked automatically, and then the component user could set custom properties on the shadow host and let them inherit into the component.

After a while, tho, we realized that this still had some issues. It was perfect for small amounts of customization - providing a "theme color" for a component or similar - but it fell down when you wanted to allow arbitrary styling of a component. For example, an alert widget might want to expose its title area to whatever inline styling the user wants. If the component could use light-DOM elements, pulled into the shadow DOM via slots, this works fine, as the user can target and fully style that element, but if the component itself generated the element, they were out of luck. The only way to get close is to define a whole bunch of custom properties, mimicking the actual CSS properties you want to allow, and then apply them all to the element in the shadow's stylesheet. This is awkward at best, and at worst can mean literally hundreds of boilerplate custom properties per styleable element, which is ugly, awkward, and slow - the trifecta!

So I thought - hey, custom properties can hold anything, right? Why not, instead of holding a single value, they held an entire ruleset, and then I add a new capability to let you substitute the whole shebang into an element's style? And thus the @apply rule was born:

/* outer page */
x-component {
  --heading-style: {
    color: red;
    text-decoration: underline;
    font-weight: bold;
  };
}
/* shadow DOM stylesheet */
.heading {
  @apply(--heading-style);
}

Mixing Levels

This seemed like a really elegant solution for a while - I got to reuse some existing functionality to solve a new problem! But gradually, we realized that it came with its own problems.

First, folding this into custom properties meant we were mixing levels, in a way that turned out awkward. For example, using a var() in a ruleset didn't do what you might think:

.list {
  --heading-style: {
    color: var(--theme-color);
  };
}
.list > x-component:first-child {
  --theme-color: red;
}
.list > x-component:last-child {
  --theme-color: blue;
}

The above code looks like it sets up the heading styles for all the components in the list, deferring the color property's value to the --theme-color variable, set on the individual components. Instead, tho, it subs in the value of --theme-color on the .list element itself.

This is due to the fact that custom properties don't care what's inside of them. A value, a ruleset, it all looks the same. As far as the CSS engine is concerned, that first rule was:

.list {
  --heading-style: █████var(--theme-color)█████;
}

And so it happily says "hey look, a variable reference! I know how to handle those!" and eagerly substitutes in the value from that element, rather than waiting until it's actually @apply'd.

I am planning on adding a feature to variables that makes them resolve "late", at use-time rather than definition-time, which would "solve" this. But it wouldn't really work: it requires the user to remember that they have to do this special thing for variables in custom property rulesets, and it doesn't solve the problem of wanting to use a late variable in a property meant to be @apply'd. Basically, this just kicks the problem a little further down the road; it doesn't actually solve anything.

This "mixing levels" thing would persist and cause problems in many other different ways.

Setting Custom Properties Inside

An obvious thing you might want to do in an @apply ruleset is set more custom properties, to provide styles for further-nested components. This brings up some new difficulties.

For one, can you set variables intended for the @apply'd element itself? You can do that with normal variables:

/* This is fine */
.foo {
  --one: blue;
  color: var(--one);
}

/* But does this work? */
.foo {
  --one: { color: blue; }
  @apply(--one)? 
}

/* How about this? */
.foo {
  --one: { --two: blue; }
}
.foo > .child {
  @apply(--one);
  color: var(--two);
}

There are use-cases for these. In particular, the last one makes sense; if it didn't work, then when you wanted to style an element in a shadow, you'd have to write all the normal properties in a custom property intended for @apply, then separately write all the custom properties you want to define:

x-component {
  --heading-style: {
    text-decoration: underline;
  };
  --theme-color: blue;
}
/* But this wouldn't work: */
x-component {
  --heading-style: {
    text-decoration: underline;
    --theme-color: blue;
  };
}

That's pretty annoying. But letting this work brings up some interesting issues. For example, you now have to guard against circularity:

x-component {
  --one: { --one: blue; }
  @apply(--one);
  color: var(--one);
}

What does the above even mean? Does the meaning change if I swap the ordering of the lines? We ended up defining that it does work, by carefully ordering the steps: first you do any var() substitution, letting custom properties define more variables; then you do @apply substitution; then you repeat var() substitution with the new values the @apply might have brought in. So the above example ends up giving the color property a value of blue. It's circuitous and confusing, since --one is interpreted in two totally different ways, and kinda annoying/expensive for the CSS engine, but it technically works.

But then we hit a further stumbling block - animations! We want to allow animations to be able to animate custom properties, but also to use custom properties (so you can, for example, animate a background from var(--theme-color-1) to var(--theme-color-2)). The way this ended up working is that the element first does variable substitution and definition as normal, then any animations defined to run on the element get to use the variables so defined, and define new ones, then the properties on the element get variable-substituted again. Sound familiar? Combining animations with @apply meant figuring out precisely how to interleave them, and how many times to re-substitute variables, and it turns out there isn't even a "correct" answer - whatever you choose, you'll exclude some reasonable use-cases.

Interacting With Selectors, and More

But ok, all that's possible to define, even if it's clumsy and confusing in some cases. Now real JS frameworks, in particular Polymer, started using a polyfilled version of @apply, in expectation of it eventually landing in browsers natively. And they ran into problems.

See, the original reason for @apply was to avoid an explosion of custom properties when you wanted to allow arbitrary styling - instead, you just had one single custom property. Much more elegant!

And that works fine, as long as you just want to throw some styles at an element and be done with it. But often, we want more than that. We want to define hover styles, focus styles, active styles. If it's an input, we want to define placeholder and error styles.

With @apply, the user doesn't have access to selectors anymore, so pseudo-classes don't exist. The component author has to reinvent them themself, adding a --heading-style-hover, --heading-style-focus, etc. And it's not uncommon to want to combine these, meaning you also need a --heading-style-hover-focus property, and more. The possibilities explode combinatorially, eliminating our nice "just one property" thing we had going, and ensuring that component users have to memorize the precise set of pseudo-classes each component chooses to expose, and precisely how they name things (is it --heading-style-hover-focus, or --heading-style-focus-hover? Or was it --heading-style_hover_focus for this one? Maybe --heading-style--hf, or --heading-style_hocus?).

This problem pervades the @apply rule, because in general it moves all the various pieces of a style rule one level down:

  • selectors get pushed into property names, losing reordering, syntax, optionality
  • property names get pushed into property values, losing the ability to easily cascade and override things - you can't define a block of properties for the normal element and then easily override just some of those properties for the element when hovered
  • property values get pushed into untyped property values, losing early grammar checking, and causing the problems explained in earlier sections

Ultimately, there are probably ways around all of these issues. For example, we toyed for a while with a "macro" facility that would auto-define the hover/focus/etc variants for you. But these "solutions" would just be reinventing, in a messy and ad-hoc way, the existing features of CSS that we so cavalierly threw away. I became increasingly disillusioned with the feature.

Enter ::part()

At the recent January 2017 CSSWG face-to-face meeting, while discussing these issues with my coworker Shane, we realized that we could avoid all of this by reviving the older ::part() proposal for Shadow DOM. This was a proposal to let the component author "tag" certain elements in their shadow with a "part name", and then the component user could target those parts by name with the ::part() pseudo-element on the component, like x-component::part(heading).

This had all the good stuff: it used CSS things at the correct level, so selectors lived in the selector space (allowing ::part(foo):hover, etc), property names lived in the property name space (allowing the page to define the same property multiple times and let the cascade figure things out), and property values lived in the property value space (var() worked correctly, no complications with animations or circularity, grammar checking works properly).
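For concreteness, here's roughly what that looks like with ::part() (x-component and the "heading" part name are this post's hypothetical component, and --theme-color is an assumed custom property):

/* outer page */
x-component::part(heading) {
  color: var(--theme-color);   /* var() resolves normally, on the outer page */
  text-decoration: underline;
}
x-component::part(heading):hover {
  color: red;   /* pseudo-classes live in selector space again */
}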

It also allowed some new useful powers - ::part() only targets the parts exposed by the component itself, not any sub-components it happens to use (unless it actively chooses to surface those sub-parts), which means better information-hiding. (Custom properties, because they inherit, default to the opposite - you have to specifically block them from inheriting into sub-components and styling them.) This also means name collisions are less of a problem - setting a custom property that the component and a sub-component uses can be a problem, but if they both use the same part name, that's just fine.

(We also have a ::theme() pseudo-element in the proposal that does automatically apply down into sub-components, for the rare times when that's exactly what you want to do.)

The one downside of ::part() is that it only works for shadow DOM. If you wanted to use @apply within your normal light-DOM page, you're out of luck. However, I'm okay with this. For one, @apply isn't actually all that useful for light DOM uses - just using normal selectors does the job better. For two, this might encourage more usage of Shadow DOM, which I consider a good result - more encapsulation is more better. (Tho we really need to explore a simple declarative version of Shadow DOM as well, to make simple structural usage of it possible without having to invoke JS.) For three, within a light DOM page we can potentially do even more powerful stuff, like inventing a real mixin facility, or a selector-variables thing, or what-have-you.

There's plenty more space to experiment here, and while it does suck to lose a tool that you might have gotten excited about, @apply really is just quite a bad idea technically. Let's solve these problems correctly. ^_^

Want To Buy: a D&D 5e Rolling App


While pacing the living room this morning, I was thinking about how much 5th edition simplified D&D. Almost every single roll boils down to:

  • d20 (maybe with advantage or disadvantage)
  • stat bonus
  • maybe proficiency

That's it! Attacks, skills, saving throws, they're all the exact same set-up. This is great, as it makes both the players' and the DM's lives easier.
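The whole mechanic fits in a few lines; here's a sketch (the function name and the "adv"/"dis" strings are my own invention, not from any real app):

```python
import random

def roll(stat_bonus, proficiency=0, advantage=None):
    """One 5e-style check: d20 + stat bonus + maybe proficiency.

    advantage may be None, "adv" (take the higher of 2d20),
    or "dis" (take the lower).
    """
    a, b = random.randint(1, 20), random.randint(1, 20)
    if advantage == "adv":
        d20 = max(a, b)
    elif advantage == "dis":
        d20 = min(a, b)
    else:
        d20 = a
    return d20 + stat_bonus + proficiency

# e.g. a proficient attack with a +3 stat at level 5 (+3 proficiency):
print(roll(3, proficiency=3, advantage="adv"))
```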

This got me to thinking about a simplified rolling app, taking advantage of this regular structure. Here's how I envision it:

Taking up the lower 2/3 of the screen or so, close to your thumb, is a hexagon with your six stats in each corner. Clicking on one of these does a straight stat roll - d20 + stat - and displays it.

In the center of the hexagon are a number of bonus bubbles, representing common bonuses. To use these, you fling a bubble towards one of the stats, and it does an appropriate roll.

One bonus bubble is always there, labeled "P" for proficiency bonus. You use this for most things: attacks with weapons or spells you're proficient in, and skills or saving throws you're proficient in. Several others can be turned on in settings to represent additional possible bonuses:

  • If you have players make defense rolls (rather than monsters doing attack rolls against the player's AC), a shield bubble, set up with their armor bonus.
  • If the player has the "half proficiency on all skills" bard bonus, a "1/2 P" bubble.
  • If the player has Expertise (double prof on some skills), an "E" bubble.
  • If the player has a magic weapon, or some other attack with a persistent bonus, a sword bubble, set up with their special bonus. (Proficiency is assumed here.)
  • Probably something customizable for misc bonuses the character commonly makes because of a magic item or something.

On the top of the screen are two representations of 2d20, one colored yellow/orange and happy, one purple/black and unhappy. If you click one of these before making a roll, it'll make it with advantage or disadvantage.

In the settings screen, you can input your six stats and your current level (required), and optionally set up the additional bonus bubbles as described above.

There's also a log of the last hundred rolls or so, so recent rolls can be checked if the player makes a mistake or accidentally dismisses the roll results too quickly.

Writing this would be a fun exercise in learning Pointer Events, I think. I might get around to it at some point. If anyone else decides to make it first, please let me know. ^_^

Fantasy World Racial Traits


I've previously riffed on "my elves and dwarves". Translating this over to mechanics, tho, makes me a little uneasy.

I've got a bit of a bee in my bonnet over people attaching too much "inherent flavor" to mechanics. For example, in my current D&D game I'm playing a Bard. The other players in my game have tried to refer to me as such in-game, and I had to correct them - my character isn't a bard by any stretch of the imagination. He's a noble son, raised in the Mondavi family tradition, which involves a mix of physical, social, and magic training, and uses a musical/lyrical focus for their magic to double-dip on those categories. There's no "Bard's College" that he's associated with, and he doesn't play music in taverns for coin or tell stories to crowds (tho he can certainly tickle the ivories in a more upscale party, if he wants to entertain his friends).

I spread this same philosophy to all the classes in the game, tho it does sometimes require one to be a little creative in interpreting things. The point is just that mechanics are nothing more than numbers and rules; they can admit a lot of interpretations, and doing so frees up character-gen in a lot of interesting ways.

I think the same should apply to races. I don't necessarily want to RP as a half-orc just because I want a tough person who's great with big weapons; maybe I just want my character to be a human, or a buff elf. And, overall, this actually works quite well - with a little bit of creative tweaking, all the "races" in the DMG can be interpreted as just traits you're born with / trained into earlier in life, before you started adventuring. Reusing the terminology from some other games, they become a "Background Feat" granted at first level. This has precedent in several systems I'm already familiar with: Iron Heroes had some feats marked as "Background Feats" which could only be taken at first level, and were a little more powerful than a normal feat; Numenera's character creation consists of completing the phrase "I'm an ADJ NOUN who VERBS", where the NOUN is your class and the ADJ and VERB are additional qualities that can represent your upbringing or race.

The only exceptions to this are the handful of "flavor" features that are definitely more biological, not thematic. The half-orc's traits suggest Strong but they also have darkvision, not because they're strong but because that's how half-orcs work. Same for elf trance, or halfling/gnome smallness. These carry little to no mechanical value - they're not used to balance the races - so it's okay to just attach these to the race itself, rather than the trait we extract from them.

So, here's the D&D 5e races, reinterpreted as background qualities that can apply to any race:

Tough

  • Con +2
  • Adv on saving throws against poison
  • Resistance to poison damage

and

  • Wis +1, HP +1/hit die, or
  • Str +2, proficient in light/med armor

Fae-Blooded

  • Dex +2
  • Proficiency with Perception
  • Adv on saving throws against charm

and

  • Int +1, cantrip, extra language, or
  • Wis +1, speed 35, hide when lightly obscured, or
  • Cha +1, learn dancing lights, faerie fire, darkness

Wily

  • Dex +2
  • Reroll 1s on attack/damage/skill/saving throw, must take second result
  • Advantage on saving throws against frightened
  • Move thru space of equal or larger creatures

and

  • Cha +1, can Hide behind an equal or larger creature, or
  • Con +1, advantage on saving throws against poison, resistance to poison damage

Skilled

  • +1 to all stats, or
  • +1 to two stats, proficiency in one skill, gain one feat

Elemental

  • Str +2, Cha +1
  • Magic blast (5'x30' line or 15' cone, choose acid, lightning, fire, poison, or cold)
  • Resistance to your blast element

Smart

  • Int +2
  • Advantage on Int/Wis/Cha saving throws against magic

and

  • Dex +1, gain minor illusion cantrip, simple talking with small animals, or
  • Con +1, Expertise in Int(History) checks about magic/alchemical/tech items, tinker to create small devices

Charming

  • Cha +2, +1 to two other stats
  • Advantage on saving throws against charm
  • Proficient in two skills

Strong

  • Str +2, Con +1
  • Proficient in Intimidate
  • Can drop to 1hp instead of 0, once per short rest
  • +1 die on crits

Demon-blooded

  • Cha +2, Int +1
  • Resistance to fire damage
  • Know thaumaturgy, hellish rebuke, and darkness, usable 1/day