Each act of digital production is an opportunity to shrug off the burden of opaque software and to focus instead on the medium of the production itself. Each act is therefore an invitation to embrace writing as digital expression’s central activity: Not gestures, not menus. Not visual interfaces or resource-intensive all-in-one software packages, which embed production knowledge so deeply and invisibly that it can no longer be acted upon or understood.

In digital form, writing is nothing but strings of characters, endlessly and potentially losslessly manipulable under conditions that menu-driven graphical software short-circuits in the name of intuition and user-friendliness. More than any other mode of expression, writing demands revision. There is no hope of getting it right the first time. Or the second.

When writers write and revise by hand to make digital things, they introduce a missing human element and human timescale into the process and the production. The tenuous distinction between natural and computer languages dissolves, revealing itself as a construct that thrives on and maintains fear and ignorance of many core affordances and constraints of the digital medium.

People working in rhetoric, writing, and the humanities should long ago have come to know that. Yet far too many of them, and their students, surrender writing and its demand for sophisticated production knowledge to any interface that promises to make an author’s life easier. And they do so even to the exclusion of a more diverse audience, whose individual members differ widely in corporeal, cognitive, and technological conditions of access.

Software and communications technologies that elevate ease over expertise are the culprits here. Those who teach have an even more pressing responsibility to learn and then engage students with digital approaches and technologies that students themselves would not likely discover independently. Students must be afforded the opportunity to write markup, programs, APIs, and commit messages in the same range of learning situations as they write essays and exams today. They must be encouraged, supported, and even joined by their instructors in failed first efforts. The richest learning experiences reveal how failure and crude initial work transform to something better only through ongoing research and revision.

Lo-Fi Production Technologies

Lo-fi production technologies are stable and free: sometimes free as in beer; sometimes free as in speech; and sometimes, if not chosen only after careful research, free as in puppy. Equally important, lo-fi technologies are modular and swappable, and can be combined or replaced as needed. They consist of:

  1. Plain text files encoded according to the international Unicode standard
  2. Plain text editors (Notepad++, TextWrangler, Atom, vim, etc.) that support syntax highlighting to make source code easier to read
  3. Human-readable, standards-compliant computer languages for specific tasks such as markup, design, scripting, and programming (Markdown, HTML, CSS, JavaScript, Perl, Python, Ruby, etc.)
  4. Single-media files (image, audio, video) in open formats, preserved in a lossless state but presented in additional forms, sensitive to network conditions, screen density, and reader ability and preference

To better develop digital projects for a diverse, distributed audience and to collaborate with others, lo-fi production should be learned and developed in tandem with an essential stack of supporting technologies:

  1. Unix-like operating systems, particularly the many variants of Linux and BSD that comply to a large degree with the group of IEEE standards known as the Portable Operating System Interface, or POSIX
  2. Version control, sometimes called source or revision control, for recording the changes to a lo-fi project over time and coordinating the contributions of multiple people collaborating simultaneously in parallel; Git is one of the more well known version control systems, but there are many others
  3. Package managers for installing any necessary software at the operating-system level and for installing libraries and frameworks in any scripting or programming languages in use for a project (e.g., npm for JavaScript-based Node.js, or RubyGems for Ruby)
  4. Standardized TCP/IP-based network protocols, ideally in encrypted forms, including mainstays such as HTTPS and SSH; most Unix-like operating systems ship with all of the developer tools necessary for spinning up network protocols for local development behind a network’s firewall, instead of on the open Internet

Despite their humble, decades-old base technology (plain text and Unix, basically), thoughtful assemblages of lo-fi technologies are remarkably hi-fi. They are likely what power your favorite smartphone apps and make possible web applications that offer the same rich interaction as found in traditional desktop applications.

Whether or not you create your own digital projects using lo-fi technologies, you most certainly benefit from, consume, and make use of them. Their absence is more obvious when you run into trouble: a web browser complaining about a missing plugin, an email attachment that can’t be opened on a smartphone, or an older file that cannot be edited in the latest release of the software used to create it originally. Sometimes trouble presents itself as a piece of software that students must use in a specific version in order for classroom instruction to proceed as planned; the newer version might require a new set of instructional materials, next time around, just to keep current with a different interface.

Lo-Fi is LOFI

“Lo-fi” describes a limited set of production technologies that people creating digital work should strive to command. As an acronym, LOFI outlines four principles of digital production that are essential for the advancement, extension, and long-term preservation of accessible digital work:

  • Learning: Yes, all technologies and all acts of production require learning. Obviously. But lo-fi learning is deliberately sought out with each act of production. That is the only way it can be sought out, as the full range of problems comprising any consequential act of production is unlikely to be satisfactorily addressed by the content or timescale of any one course, book, or tutorial. Learning must both scale and transfer from individual lines of code to entire lifespans of digital production approaches.
  • Openness: Direct engagement with source code and media elements is a hallmark of lo-fi production. All components of a digital work must be available for inspection, revision, and extension outside the scope of any one device, platform, or piece of production software and any one creator. Like learning, openness must scale across time and space, including especially customization and repurposing by readers and end-users.
  • Flexibility: Lo-fi production technologies are inherently limited: there’s nothing much to click or tap around on and discover. Flexibility emerges from the thoughtful application of lo-fi technologies, not from a feature set embedded in an interface. As primarily acts of writing, lo-fi production establishes research and imaginative problem-solving as the central means to reach diverse audiences equipped with an endless variety of conventional, mobile, and assistive devices.
  • Iteration: The hefty investment in learning something new is rivaled only by the investment of time in creating the first draft of a digital project, regardless of how far short it falls of the idea that originally inspired it. This is the most elusive element of digital creation: revision without penalty. Although difficult to recognize on shorter timescales, a key benefit of lo-fi production techniques, supported especially by version control, is slow and steady improvement of existing work as well as experimentation and parallel, alternate approaches to production. (Emphasis on slow and steady; there is little to be gained from the adjective rapid that often modifies iteration.)

Whether or not you choose to embrace lo-fi technologies, lo-fi principles are a useful heuristic to evaluate the production technologies that you bring to your own digital work, and to that of your classroom if you have one. Of all the lo-fi principles, iteration is the most consequential: It emerges from the sustained pursuit of the other three—learning, openness, and flexibility—even as it ensures that the other three remain an integral part of digital production.


1. Software is a poor organizing principle for digital production.

“What program do you use?” is a question I often get about the slides I use to present my work. I have concluded that the proper answer to the question is to counter-suggest the asking of a different question, “What principle do you use?” John Maeda, The Laws of Simplicity

It is alarming when software, commercial or otherwise, comes to signify entire digital genres. Compare the number of results in a search engine for PowerPoint best practices versus slideshow best practices. The results suggest that vendor lock-in has as much of a grip on how people talk about production as it does on what actually gets produced.

Consider a software-independent, lo-fi alternative to PowerPoint: Eric Meyer, CSS guru and design wizard, developed and released into the public domain a Simple Standards-based Slide Show System (S5). Meyer’s system uses the lo-fi web languages of structural XHTML, media-specific CSS, and JavaScript to deploy slideshows. Unlike PowerPoint slideshows, which require either the PowerPoint software itself or the Microsoft PowerPoint Viewer for optimal viewing, S5 slideshows function in any standards-compliant web browser.

But S5 is just a start. Even more feature-rich lo-fi projects have been developed, including Hakim El Hattab’s Reveal.js, a slideshow framework that leverages CSS3 animations and an advanced JavaScript API on top of the same basic web languages as Meyer’s S5 to produce slideshows that are in many ways superior to PowerPoint. For example, Reveal.js supports multiplexing, which allows audience members to pull up the slideshow on their own devices. The slides advance automatically on each device as the speaker moves through the slideshow: a huge accessibility boost over forcing an audience to squint at distant projected slides. Additionally, unique URLs for each slide make it possible for someone providing live social-media coverage of a talk to share pointers to specific slides. Formal citation benefits as well.
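Each slide in a Reveal.js deck is nothing more than a section element inside an otherwise ordinary web page. The sketch below shows one way such a deck might look; the stylesheet and script paths, and the hash option for per-slide URLs, are assumptions based on recent releases of the framework and should be checked against whatever copy you download:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>A Lo-Fi Slideshow</title>
    <!-- Paths below are assumptions; point them at your copy of Reveal.js -->
    <link rel="stylesheet" href="reveal.js/dist/reveal.css" />
  </head>
  <body>
    <div class="reveal">
      <div class="slides">
        <!-- Each section element is one slide, editable in any text editor -->
        <section><h1>Lo-Fi Production</h1></section>
        <section><h2>Plain text endures</h2></section>
      </div>
    </div>
    <script src="reveal.js/dist/reveal.js"></script>
    <script>
      // hash: true assigns each slide a unique URL for citation and sharing
      Reveal.initialize({ hash: true });
    </script>
  </body>
</html>
```

The entire deck remains a set of plain-text files: versionable in Git, editable anywhere, and viewable in any standards-compliant browser.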

Lo-fi systems like S5 and Reveal.js are well suited to the rhetorical situation of the slideshow, which shares with all digital productions a defining characteristic: uncertainty. Slideshows are commonly projected on unfamiliar computers (often, it seems, with dubious maintenance records) that a speaker might have access to only shortly before speaking. Will that computer have PowerPoint installed? The right version of PowerPoint, at that? If not, will the logged-in user have sufficient privileges and a network connection to download the PowerPoint viewer? If all else fails, will a reasonably competent IT person be present to step in and help?

Such problems, rooted in the inflexible digital materiality of the PowerPoint file itself, are easily avoided by lo-fi alternatives like S5: Even if the computer runs an outmoded browser (and what computer doesn’t have a browser installed?), S5, Reveal.js, and other well-constructed lo-fi slideshows will operate more or less as planned while remaining editable in any text editor available. Speakers can even keep their slideshow and a portable version of Firefox on a USB drive, should Internet access be sketchy or fail outright.

As that simple example shows, looking beyond the apparent inevitability of software like PowerPoint brings the aims of the digital genre itself into focus. It invites a more flexible, rhetorical approach to production than focusing on the features and limitations of a given piece of software.

In the classroom, software should therefore not be selected based on its high-end features or the size of its installed user base in corporate settings. Instructors should resist assuming that they better prepare students for the workforce by teaching exclusively the most commonly used word-processing or page-design software, whose interfaces are wildly unstable and intrude upon thinking deeply about production problems. Those who teach should instead lead students in working through approaches and technologies that foreground the rhetorical situation of digital production, especially the uncertainty that software like PowerPoint attempts to paper over. Rhetorically focused instruction establishes familiarity with the affordances and constraints of open standards and formats, and admits of the many uncertain and unknowable factors that determine how a digital artifact will be accessed and displayed. Yet inspiring examples like Reveal.js demonstrate how rewarding and liberating it is to learn to command open technologies through written language.

2. Expression should not be trapped by production technologies.

Every platform tells you that it’s the best, that it is worthy of your time and attention. But there’s always another platform. Karen McGrane, Content Strategy for Mobile

Too many software programs create roach motels for content and information: The data checks in via File > Import, or a file-upload dialogue box on a web application, but it never checks out. Such digital artifacts—the PowerPoint, the PDF, the word-processor document—are only marginal improvements over the entrapped quality of analog, print information. In many ways, such as their non-negotiable dependence on a specific piece of software for viewing, those closed formats are actually steps backward from the comparatively open access that books and other printed matter provide.

The author-privileged focus of closed, roach-motel formats and WYSIWYG software is explicit in the latter's acronym: What YOU See is What YOU Get. As though YOU, the author, were the only one who mattered in the digital rhetorical situation. If it looks good for me in Dreamweaver or my desktop browser, so the logic goes, it must look good everywhere for everyone. At a time when screens range between postage-stamp-sized wearables and 88-inch ultra-high-definition televisions, it is lunacy to assume that what the creator of a work sees is what everyone, or really anyone, sees. The tireless pursuit of a 1:1 match between what appears on screen for an author and what's received, eventually, by a reader is a pernicious artifact of print culture deeply embedded into the interfaces of even early page-design software.

People creating digital work for others should be far more concerned about what the audience gets than what it sees. What audiences should get is flexible, open formats. Writing: the luxury of well-crafted source code that the reader’s own device will render, to the greatest extent of its capability. The Web and even the Creative Commons are efforts steeped in the promise of openness. But a Creative Commons (CC) license that allows for derivative works of, say, a Web-available PDF is an oxymoron at best: just try to extract an archive-quality image from a PDF file, or to listen for coherence as an audio screen-reader meanders unpredictably through a multi-column document. In those cases, the CC license emphasizes gestures of openness over careful preparation of digital artifacts with a genuine capacity to support derivative works, or even basic device- and ability-neutral access.

To make a digital project genuinely friendly to derivative works, it needs to be maximally flexible (cut and paste does not count). A version-control repository containing the lo-fi elements of plain-text and single-media files, and their history, is the most generous expression of flexibility. It recognizes that an unknowable group of users and their devices should be able, one day, to rework the content of different media elements. The repository also acknowledges that there may be other platforms and production approaches in the future. The digital creator’s responsibility is to reference and orchestrate elements that can be accessed in a combined or piecemeal fashion: only then is a CC derivative-works license viable, or even honest.

Any given digital artifact needs to be constructed not as a final resting place for an idea or some information, but as a pause in a stream of further, unfettered access and revision. A web page listing an organization’s members’ names and email addresses, for example, can be made more open through the use of microformats. Rather than cutting and pasting the contents of the page, or returning each time the page’s information is needed, a user can detect the presence of the h-card microformat with a parser for microformats. It would then be possible to import some or all of the membership’s contact information directly into her own email address book. Should electronic address books become microformat-aware, the address book could query the URL containing the contact information and update entries automatically.
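Marked up with the h-card microformat, such a membership page might look like the sketch below. The names and addresses are invented examples, but the class names themselves (h-card, p-name, u-email) come from the microformats2 vocabulary:

```html
<!-- Each list item is one h-card; any microformats parser can
     extract the name and email pairs without cutting and pasting -->
<ul>
  <li class="h-card">
    <span class="p-name">Ada Example</span>,
    <a class="u-email" href="mailto:ada@example.org">ada@example.org</a>
  </li>
  <li class="h-card">
    <span class="p-name">Grace Example</span>,
    <a class="u-email" href="mailto:grace@example.org">grace@example.org</a>
  </li>
</ul>
```

Nothing about the markup changes how the page reads in a browser; the added class attributes simply make the structure of the content available to software as well as to human readers.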

Preparing rich, user-manipulable data using techniques such as microformats, which inspired the microdata features of the HTML specification, is a unique, lo-fi method to structure and openly share content. But dependence on WYSIWYG software has kept people largely ignorant of data serialization and semantics supported by languages like XML and JavaScript Object Notation (JSON).

Digital works should long outlast the software that played a role in their creation. Insisting on open standards and formats, not software packages, from the moment of authorship to the moment of reader access is the only way to make that happen. People creating digital work should value the command of lo-fi technologies at the code level: not in service to machines, but in kindness to other human beings whose specific technology access and physical ability are ultimately unknowable.

There are any number of venues to consult for authoritative guidance on language and format standards. Standards for languages are openly available from the W3C (e.g., XML, XHTML, CSS) and ECMA (most notably ECMAScript, the standard version of JavaScript). There are other standards, from character sets at ISO to file formats and MIME/Internet media types at IANA. Regarding choices of single-media files, it is worthwhile to consult a library-backed resource such as the United States Library of Congress’s developed and adopted standards.

But few people doing lo-fi production will need to consult the specifications directly in the normal course of production. Community-maintained documentation, such as the Mozilla Developer Network, is better organized and presents the essential information relevant to many production problems.

3. Value research and learning over intuition and reflex.

When one steps back from the marketplace, things can be seen in a different light. While time passes on the surface, we may dive to a calmer, more fundamental place. There, the urgency of commerce is swept away by the rapture of the deep.... Form, structure, ideas, and materials become the object of study. Brenda Laurel, Design Research: Methods and Perspectives

Acts of digital production should establish conditions for learning beyond pointing and clicking through an arbitrary set of menus and dialog boxes. Point-and-click, GUI-driven WYSIWYG production approaches are not extensible: Beyond exposure to certain visual conventions, learning to navigate Microsoft Word or Google Docs has little transferability to future efforts even in other GUI-driven software like Photoshop, not to mention essential lo-fi languages like HTML, CSS, and JavaScript.

Expertise is the price to be paid for intuition and reflex, the two central benefits of well-designed GUI-driven production software. Together, intuition and reflex make for easy software that’s fun to use. There is no question about that. And when it comes to apps for personal use, from email and messaging to social networking and gaming, software absolutely should be intuitive and provide the kinds of interfaces and visual cues that make for reflexive, unstudied use, for users coming from more than a handful of particular cultural and socioeconomic backgrounds.

The problem is that the market for GUI-driven software has conditioned people to expect the same ease everywhere. To be sure, there should be no need to write source code just to check email or to book a restaurant reservation online. But that does not mean that no occasions exist when writing source code is absolutely necessary. A lo-fi approach rejects intuition and reflex in exchange for the uncomfortable uncertainty and time-consuming struggles of research and learning. Intuition and reflex are only for today, for the person making something. What is being made, and for whom, are always different problems, project to project. Research-driven lo-fi production is not just about investigating the how of production that visual interfaces embed and make intuitive. Lo-fi production directly addresses essential audience-driven concerns of digital creation that are also the most stable and sustainable: the what and the why, under the human and technological constraints of for whom.

Lo-fi methods open access to the languages and methods of production obscured by and embedded in visual interfaces. Production approaches anchored to open, standardized languages have a longer shelf-life than those embedded in GUI-driven software. The essential properties of HTML 4.01 in 1997 are identical to HTML5 in 2016, but there is no penalty or accessibility cost associated with writing HTML 4.01 today. The same cannot be said for Microsoft Office 97 and its current version: As of December 2015, there are over five million Google results for how to open a Word 97 document. Word 97 knowledge is as defunct as the objects that it produced, which more than a handful of people have struggled to access.

Although languages, like software, are subject to change in future releases, languages retain their essential character version to version. So too do the essential text-based interfaces of command-line applications on Unix-like operating systems: cd; ls will always change to your home directory and then list its contents. The markup languages SGML, HTML, and XML look and behave very similarly, for another example, despite the fact that SGML was developed in the 1960s and standardized in 1986, and XML in 1998. To learn any one of those languages is to have learned the others.

Or more accurately, to learn any one markup language is to learn about the general idea of markup languages. It is foolish and certainly difficult to confidently write more than a few lines of HTML without referring to a solid reference, such as the HTML element reference maintained by the Mozilla Developer Network. Consulting and researching an element reference does more than explain what to type: It opens up ways of thinking about individual elements and their histories as well as their ongoing development. Research transforms production, making it as much an object worthy of study as the content it’s meant to convey.

Learning builds on research, but the deeper learning of the greatest value requires stability. The stability of computer languages is due, in part, to common ancestors. For example, there are few scripting or programming languages that are not at least influenced by C. Learning one language on a family tree is inherently preparation to learn others. Even languages that are essentially unrelated (say, CSS and PHP, or HTML and Ruby) share much of the same meta vocabulary and concepts: declarations in CSS are terminated with a semicolon, as are statements in PHP. Nested tags in HTML resemble nested blocks in Ruby. Prepared with that sort of vocabulary, people engaged in lo-fi production can develop mental models for how languages operate in conveying a particular idea in service to diverse audiences. They can leverage exacting Google searches to research and solve a wide range of production problems. They can, with time and patience, achieve the highest levels of thinking valued in the humanities and other academic disciplines: theory, reached via abstraction and contemplation based on studied, deliberate experience.
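The shared vocabulary is easy to see even in a few lines. In the sketch below, CSS declarations end in semicolons much as PHP statements do, and HTML elements nest and close much as blocks do in Ruby:

```html
<style>
  /* Each CSS declaration is terminated with a semicolon,
     as statements are in PHP */
  p {
    color: #333333;
    line-height: 1.5;
  }
</style>

<!-- An opening tag and its matching close enclose nested content,
     much as blocks nest and terminate in Ruby -->
<ul>
  <li><p>A nested paragraph</p></li>
</ul>
```

Recognizing those family resemblances is what makes knowledge of one language transferable to the next.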

4. Design first for the most constrained users and devices.

Use progressive enhancement so people can access your site’s content even on a device that doesn’t support certain features. Optimize so it downloads fast. Insert media query breakpoints where it’s appropriate for the content, rather than based on widths of common devices. Anna Debenham, Testing Websites in Game Console Browsers

There is no better way to lose the good will of audience members than to bombard them with a series of messages demanding the installation or upgrade of software and plugins or, worse, to announce that their equipment (and, perhaps by extension, financial status or physical ability) is wholly inadequate and beyond toleration. Worse still is no message or warning at all: just a blank screen or hopelessly malfunctioning digital artifact.

A poor technological choice that denies access to anyone, for any reason, is ultimately a rhetorical problem—particularly when there are lo-fi technologies, like web standards, that address issues of access by design. Lo-fi production approaches afford an opportunity to raise our expectations of one another and to research and assume responsibility for all of the rhetorical concerns that comprise the digital medium—not just those that are easy, obvious, or convenient.

Lo-fi production technologies provide a foundation for delivering artifacts that are editable everywhere, and accessible everywhere, too. But they still require a thoughtful approach: designing first for the most constrained users and devices. Without exception. Accessibility is not some drudgery to be filled in only after the rest of the work has been done.

People with the most sophisticated whiz-bang production knowledge, or the most expensive GUI-driven software, are also typically privileged to enjoy the fastest computers, the most recent generation of smartphones, the highest-resolution displays, the speediest network connections, and the most generous mobile data plans. But that is not the way most of the world is equipped. In acts of production, it’s better to assume that none of the rest of the world is equipped that way.

Make a habit of producing a single artifact across as many different computers and devices as you can get access to. Nothing will make you rethink your production approach more than 30 minutes on a dilapidated hotel-lobby computer used mostly for printing boarding passes. Throw a different operating system into the mix by running Linux off of an external hard drive on your primary computer. Make sure it doesn't have your usual typefaces or software, then get to work. Make every word and every line of source code count. Make every byte of a media file work hard to justify the time and resources necessary to download it. And if it cannot, get rid of it.

Once you have a world-accessible draft, test it everywhere: the public library, the mobile-phone store, the big-box retailer’s electronics department. And not on the really expensive stuff, either. Choose the cheapest laptop loaded with the most awful bloatware. The mobile phone with the smallest, ugliest little display. Disable, if you can, the LTE internet connection to see the 2G world of the people who’ve already burned through their pitiful three-gigabyte LTE data allotment for the month, perhaps thanks to a monstrous PDF file someone sent as an email attachment without thinking for a moment what consequences that might have. Get a real, lived sense of just what it is that other people might see when they access the thing that you’ve created.

Producing accessible digital artifacts is neither an end in itself nor a testament to the supremacy of technology over human concerns. Rather, accessible artifacts arise from the equal application of care and attention to detail that is traditionally expected of content. That means accepting the gift that comes from designing first for the most constrained devices and users. Constraint is not a limitation on creative expression; it is a baseline experience that demands as much as possible from as little as possible. Constraints invite careful investigation as to what matters most, whether in terms of user-interface design or loading media elements.

It is from that solid baseline that additional features and functionality can be added, in an unobtrusive way that benefits those who are able and can afford to experience them, without penalizing those who cannot. Readers of accessible, lo-fi artifacts will appreciate not being told what they must do (even if they are left blissfully, mercifully ignorant of the enhanced coolness they may be missing out on); and people producing well-researched digital work can develop content and ideas with far greater confidence in ethical audience access than WYSIWYG software will ever provide.
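In CSS, that baseline-first posture can be sketched with a mobile-first media query; the particular widths and layout choices below are invented for illustration:

```html
<style>
  /* Baseline: a single readable column, relying on typefaces
     the reader's device already provides */
  body {
    font-family: Georgia, serif;
    max-width: 40em;
    margin: 0 auto;
    padding: 1em;
  }

  /* Enhancement: only wider viewports opt in to a two-column layout;
     constrained devices never move past the baseline rules above */
  @media (min-width: 60em) {
    main {
      display: grid;
      grid-template-columns: 2fr 1fr;
      gap: 2em;
    }
  }
</style>
```

A browser that does not understand the media query, or a device too narrow to trigger it, simply keeps the baseline: no error messages, no penalty.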

5. If a hi-fi element seems necessary, keep researching until you conclude that it isn’t.

We do not have an interoperable Web. What we have is a glut of proprietary, closed, and protected stuff. While it’s sophisticated and interesting sometimes, it goes against the heart of what we came here to build in the first place: an accessible, interoperable Web for all. Molly Holzschlag, Web Standards 2008: Three Circles of Hell

It used to be necessary to employ Flash to handle audio and video or present web typography beyond commonly installed system fonts. But that’s no longer the case. HTML5’s <audio> and <video> elements are now widely supported, as is the CSS @font-face rule.
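A hedged sketch of both techniques follows; the file names and the typeface are placeholders, but the elements and the fallback patterns are standard:

```html
<!-- Multiple source elements let the browser choose a format it
     supports; the paragraph is the fallback for browsers without
     any video support at all -->
<video controls width="640">
  <source src="lecture.webm" type="video/webm" />
  <source src="lecture.mp4" type="video/mp4" />
  <p>
    Your browser cannot play this video directly.
    <a href="lecture.mp4">Download the MP4 file</a> instead.
  </p>
</video>

<style>
  /* local() spares readers a download when a suitable face is
     already installed; Georgia and serif are the fallbacks */
  @font-face {
    font-family: "Body Face";
    src: local("Source Serif Pro"),
         url("fonts/source-serif-pro.woff2") format("woff2");
  }
  body {
    font-family: "Body Face", Georgia, serif;
  }
</style>
```

In both cases, the fallback is written into the source itself, rather than left to whatever error message a piece of production software decides to emit.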

Of course, that those technologies exist is very different from understanding their features and limitations, not to mention exactly how widely supported they really are on current and legacy browsers. (And if you want to lose all hope and an afternoon, read up on the state of codecs and media containers for delivering video files across all browsers. It’s depressing. But the thing that will most help the situation improve is further research and involvement from a larger, more diverse group of people.)

If you’re using a hi-fi piece of production software that embeds videos in HTML, it may do nothing more than ask for the location of your video file. It’s not going to necessarily alert you to issues users might encounter, or provide fallbacks for users with older or less capable browsers. And if the software does provide a fallback, it might not be the kind of fallback you want to present. It might just be another error message and notice to upgrade.

Those kinds of concerns illustrate why lo-fi production is so dependent on research, and why GUI-driven software that promises to deliver one-click solutions to those kinds of problems should be treated with suspicion.

It doesn’t take much research to find hi-fi production technologies. They’re well marketed and have plenty of brand-name recognition. They come pre-installed, often in broken or incomplete form, on consumer PCs, and are likely on many of the machines in the computer labs at schools and universities around the world. They’re also on the computers found in most office cubicles, which more than any other scene of computing seems to be the primary inspiration for both campus and personal computing.

Ask someone why they chose a particular technology for a project, and you will often find one little feature driving the decision. It’s astounding, for example, to discover that people choose to set up WordPress to run a small website simply because they wanted a way to repeat the navigation across the four or five pages that made up the site. For that one feature, they pay the tax of securing a database connection and applying software updates for the life of the project, lest the infamous pharma hack or one of its many variants compromise the site. Had such a small site been built with basic HTML, or a static site generator like Jekyll or Wintersmith, no updates beyond those routine to the web server itself would likely be needed.

On its face, something like WordPress looks lo-fi. WordPress is all open source, all built on a simple setup of lo-fi technologies: the LAMP stack, or Linux, Apache, MySQL, and PHP. But it’s actually the MySQL database that invites a closer look and further research. A database might be lo-fi on its surface, but a database is best employed only under one of two conditions, usually both: first, there must be far more records than can be reasonably handled by flat files (that is, a database record per page of a website, rather than an HTML file per page). Second, database-like things must routinely be done to those records: sorting, counting, joining, and so on, in the context of more read–write operations than can be handled by flat files. A five-page website that’s infrequently updated does not fit that bill.

The kind of research required for lo-fi production is always aimed at a particular problem. Maybe it’s how to handle templating in a lo-fi way: If a solution includes a database or an oversized code library, more research is needed. Or maybe, as in the earlier CSS @font-face example, it’s the problem of loading a custom typeface onto a web page. Which opens up questions like Why that typeface? And then How to load the custom face? And that in turn should open up an investigation into what the consequences are, both in terms of legality (font licensing is a particularly thorny issue) and user experience. Typefaces eat up bandwidth like any other media asset. Is it worth the potential expense, on metered connections, or the wait to load the typeface? Especially when adequate, if not perfect, typefaces may be readily available on the reader’s device? Then there are other considerations: Certain typefaces, particularly icon fonts, map letters and numbers to particular icons for ease of use, which may result in weird accessibility issues for users of screen-reading software. On certain displays, typefaces that have not been manually hinted may look just terrible, undercutting the very aesthetic that motivated loading a custom typeface to begin with.
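That research can land in a handful of lines of CSS. A sketch of the @font-face problem, with hypothetical font files and a fallback stack of faces likely already on the reader’s device:

```css
/* Hypothetical custom face; check the font’s license before serving it */
@font-face {
  font-family: "Example Serif";
  src: url("fonts/example-serif.woff2") format("woff2"),
       url("fonts/example-serif.woff") format("woff");
  /* Show fallback text immediately, swapping in the custom face once loaded,
     rather than leaving readers staring at invisible text */
  font-display: swap;
}

body {
  /* Device-available fallbacks come after the custom face */
  font-family: "Example Serif", Georgia, "Times New Roman", serif;
}
```

Each line there is the residue of a research question: the formats served, the license consulted, the loading behavior chosen, the fallbacks named.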

In lo-fi production, every single feature and consequence is a potential avenue for research. Nothing, not even something as low-level as a typeface, should be mindlessly dropped in and glossed over. And particularly when only a hi-fi solution seems viable, there is always the need to push a little harder on the actual production problem, and research accordingly.

6. Version control. Always. Everywhere. For everything.

“Having the entire history of your project available to you is the key benefit to any version control system.” (Travis Swicegood, Pragmatic Version Control Using Git)

Next to the plain-text editor, there is no more important piece of software in a lo-fi stack than a version control system. It is the piece that makes experimentation possible, reduces the friction of collaboration across time, space, and platforms, and makes learning, along with the sorely lacking component of revision, a central part of digital production.

With minor variations, version control systems (VCSs) organize projects into repositories. The repository is both the files that make up the project, and their history. In some VCSs, that history is limited to a certain number of most-recent changes; in others, the repository’s history goes back to the very beginning of the project.

Git is probably the most widely known VCS, thanks in no small part to GitHub, a code-hosting site based on Git that is in no way required for using Git itself. But there are many other version-control systems available. The best of them share one primary feature with Git: they’re fully distributed. Any one copy of the repository is independent of any other copy. That means work can go on uninterrupted even if you’re without an Internet connection, and it frees you to work however you please without having to make all of your work public. But when work is ready to be shared publicly or with a team of collaborators, the VCS steps in to assist in sharing that work, rather than inviting the clumsy intrusion of email attachments or generic cloud-storage services.
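A few commands are enough to see the distributed quality in action, entirely on one machine, with no server or network connection involved. The directory names and commit details below are illustrative, and the `-b` flag assumes Git 2.28 or newer:

```shell
# Create a repository and record a first commit, all locally
git init -b main demo
git -C demo config user.email "writer@example.com"  # identity for commits
git -C demo config user.name "A Writer"
echo '<h1>Hello</h1>' > demo/index.html
git -C demo add index.html
git -C demo commit -m "Add first page"

# A clone is a complete, independent repository, full history included
git clone demo demo-copy
git -C demo-copy log --oneline   # the entire history, readable offline
```

Either copy can now accumulate work on its own; sharing happens only when and if you choose to push or pull between them.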

One of the non-negotiable qualities of lo-fi production is that a single project will be split over multiple files. Even a basic web page, for example, has an HTML file that might load multiple different CSS and JavaScript files as well as different images and other media, each in its own file. There are many benefits to that, although the obvious drawback is that a single change, such as rewriting the copy for a headline and restyling it in CSS for better readability, requires changes to multiple files. Using the file system as version control by creating a series of files like index-old-01.html, index-old-02.html, and so on, quickly falls apart when files need to reference one another through URLs or load or include statements. A good version control system takes that burden away from the file system, while having no problem at all with recording a single change across multiple files.

But version control isn’t just for recording changes. Many version control systems serve as development platforms that not only record changes, but act on them. Git, for example, includes the ability to run scripts before and after certain actions. Pushing changes to a remote server can trigger a script that moves the updated files into place on a world-viewable web server. Rather than messing around with error-prone, bandwidth-hungry FTP software, a simple git push from the command line is all it takes to make the latest version of the site world-available. Projects like Capistrano add in their own advanced functionality on top of Git to handle more complex development stacks, which might include databases and other services that require configuration, maintenance, and restarts as part of deploying a project to a live web server.
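The push-to-deploy arrangement can be simulated on a single machine: a bare repository stands in for the remote server, a `www` directory stands in for the web root, and a post-receive hook does the moving. All paths and names are illustrative, and `-b` assumes Git 2.28 or newer:

```shell
# The "server": a bare repository and a web root beside it
git init --bare -b main server.git
mkdir www

# After every push, the hook checks the newest files out into the web root
cat > server.git/hooks/post-receive <<'EOF'
#!/bin/sh
GIT_WORK_TREE=../www git checkout -f main
EOF
chmod +x server.git/hooks/post-receive

# The writer's working copy
git init -b main working
git -C working config user.email "writer@example.com"
git -C working config user.name "A Writer"
echo '<h1>Now live</h1>' > working/index.html
git -C working add index.html
git -C working commit -m "First page"
git -C working remote add origin ../server.git

# One command makes the latest version of the site "live"
git -C working push origin main
ls www
```

On a real server the bare repository would live at the end of an SSH URL, but the shape of the workflow, commit locally and push to deploy, is the same.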

Version control is also a huge asset to learning new languages and frameworks. Habitually creating repositories for working through examples in books and tutorials makes it much easier to spot changes that might not directly be mentioned by the author. The use of branches also supports exploratory changes that deviate from the book or tutorial’s advice. And that’s often where deeper learning can happen.
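A throwaway repository makes that habit concrete. The sketch below follows a book’s example on one branch, tries a deviation on another, and returns to the book’s version intact; the file contents and branch names are illustrative, and `-b` assumes Git 2.28 or newer:

```shell
# A repository just for working through a tutorial
git init -b main tutorial
git -C tutorial config user.email "learner@example.com"
git -C tutorial config user.name "A Learner"
echo 'h1 { color: black; }' > tutorial/style.css
git -C tutorial add style.css
git -C tutorial commit -m "Follow the book's styling example"

# Branch to try a variation the book doesn't cover
git -C tutorial checkout -b experiment
echo 'h1 { color: rebeccapurple; }' > tutorial/style.css
git -C tutorial commit -am "Deviate: try a different color"

# Return to the book's version at any time; the experiment is preserved
git -C tutorial checkout main
cat tutorial/style.css
```

Both lines of work survive, so there is nothing to lose by wandering off the tutorial’s path.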

For those who teach, version control represents an essential, missing part of digital pedagogy. What matters in student work is not what the project is at any given moment from a first draft to a final project submission. What matters is what it was, and what it next became. In between those two points in time is where learning should take place. A student coming for help with a broken project that used to be working can, with the assistance of version control, trace the exact moment in time it ceased to function. The instructor in turn learns of a key piece of teaching that might have failed, or an object lesson to teach from in the future.
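Git’s bisect command automates exactly that kind of tracing through history. The sketch below builds a small simulated repository in which one commit quietly introduces a break (the BROKEN marker stands in for any failure a command could detect), then lets Git search for the first bad commit; names and contents are illustrative, and `-b` assumes Git 2.28 or newer:

```shell
# A project with four commits, one of which breaks the page
git init -b main debug
git -C debug config user.email "student@example.com"
git -C debug config user.name "A Student"
echo '<p>fine</p>' > debug/page.html
git -C debug add page.html
git -C debug commit -m "step 1"
echo '<p>still fine</p>' >> debug/page.html
git -C debug commit -am "step 2"
echo '<p>BROKEN</p>' >> debug/page.html   # the (simulated) breaking change
git -C debug commit -am "step 3"
echo '<p>more work</p>' >> debug/page.html
git -C debug commit -am "step 4"

# Mark the current state bad and the first commit good, then let Git
# run a test at each midpoint until it names the first bad commit
git -C debug bisect start HEAD "$(git -C debug rev-list --max-parents=0 HEAD)"
git -C debug bisect run sh -c '! grep -q BROKEN page.html'
```

Even when no command can detect the breakage automatically, the same bisect process works by hand, with the student marking each checked-out revision good or bad, and learning to read their own history as they go.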