
Remember Lenny

Writing online


Archives for September 2020

Why is video editing so horrible today?

September 15, 2020 by rememberlenny

In the last three months, I have done more video post-production than I had in the previous 12 years. Surprisingly, over those years, very little seems to have changed. Considering how much of the media involved, both audio and visual, is now machine-analyzable, I'm surprised there aren't more patterns that make navigating and arranging video content faster. Beyond that, I'm surprised there isn't more tooling for programmatically composing video in a polished way that complements the existing manual methods of arranging it.

In the early days of film, if you shot something and wanted to edit it, you took your footage, physically cut it, and arranged it according to how you wanted it to look. Today, if you want to edit a video, you import the source assets into a specialty program (such as Adobe Premiere), and then manually review each item to watch/listen for the portion that you want. Once you have the sections of each imported asset, you have to manually arrange each item on a timeline. Of course a ton has changed, but the general workflow feels the same.

Real life photo of me navigating my Premiere assets folders

How has video production and editing not developed digital-first methods of creation? Computing power has skyrocketed, storage is effectively unlimited, and our computers are networked around the world. How is it that the import, edit, and export workflow still takes so long?

The consumerization of video editing has simplified certain elements by abstracting away seemingly important but complicated components, such as the linearity of time. Apps like TikTok represent the most dramatic shift in video creation, in that the workflow centers on immediately reviewing and reshooting a clip. Over the years, tools like iMovie have moved their timelines from a horizontal representation of elapsed time into general blocks of "scenes" or clips. This simplification through abstraction is important for the general consumer, but it reduces the attention to detail. It also creates an aesthetic of its own, which seems to be a direct result of the changing tools.

Where are the video-editing equivalents of the things I take for granted in developer tools, like autocomplete or class-method search? What does autocomplete look like when editing a video clip? Where are the repeatable "patterns" I can write once and reuse everywhere? Why does each item on a video canvas seem to live in isolation, with no awareness of other elements or any ability to interact with them?

My code editor searches my files and tries to "import" the methods when I start typing.

As someone who studied film and animation exclusively for multiple years, I'm generally surprised that the overall ways of producing content are largely the same as they were 10 years ago, and seemingly as they have been for the past 100.

I understand that the areas of complexity have become more niche, such as VFX or multimedia. I have no direct experience with complicated 3D rendering, and I haven't tried visual editing for non-traditional video displays, so it's a stretch to say film hasn't changed at all. I have barely scratched the surface of new video innovation, but all things considered, I wish some basic things were much easier.

For one, when it comes to visual layout, I would love something like Figma's "auto layout" functionality. If I have multiple items on a canvas, I'd like them to self-arrange based on some kind of box model. There should be a way to assign the equivalent of styles as "classes", as with CSS, so that multiple text elements can inherit/share padding and margin definitions. Things like flexbox and relative/absolute positioning would make visual templates significantly easier and faster to develop for fresh video content.
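To make that concrete, here is a purely hypothetical sketch of what "style classes" for elements on a video canvas could look like. The type names and fields are mine, not any existing tool's API.

```ts
// Hypothetical sketch: CSS-like "style classes" for video canvas elements.
// None of these types exist in any current editing tool.
interface CanvasStyle {
  padding: number;          // px, shared by every element using this class
  margin: number;           // px
  position: "relative" | "absolute";
  layout: "row" | "column"; // flexbox-style auto-arrangement of children
}

interface CanvasElement {
  id: string;
  styleClass: string;       // elements inherit shared definitions by class name
}

// One shared definition, reused by every lower-third in a project.
const styles: Record<string, CanvasStyle> = {
  "lower-third": { padding: 16, margin: 8, position: "absolute", layout: "row" },
};

const speakerName: CanvasElement = { id: "speaker-name", styleClass: "lower-third" };
const speakerTitle: CanvasElement = { id: "speaker-title", styleClass: "lower-third" };
```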

Currently I make visual frames in Figma, then export them, because it's so much easier than fumbling through the 2D translations in Premiere.

I would love to have a "smarter" timeline that can surface "cues" I may want to hook into for visual changes. The cues could make use of machine-analyzable features detected in the available audio and video. This is filled with hairy areas, and it definitely sounds nicer than it might be in actuality. As a basic example, the timeline could look at the audio or a transcript and know when a certain speaker is talking. There are already services, such as Descript, that make seamless use of speaker detection. That should find some expression in video editing software. Even if the software itself doesn't detect this information, it should make use of the metadata from other software.
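As a sketch of the kind of "cue" I have in mind, imagine speaker-detection output (say, a transcript exported from a service like Descript) mapped onto timeline markers. The data shapes below are invented for illustration; no real editor API is assumed.

```ts
// Hypothetical sketch: turning speaker-detection metadata into timeline cues.
interface TranscriptSegment {
  speaker: string;   // e.g. "Speaker 1", as labeled by a transcription service
  startSec: number;
  endSec: number;
}

interface TimelineCue {
  timeSec: number;
  label: string;
}

// Emit a cue every time the active speaker changes.
function cuesFromTranscript(segments: TranscriptSegment[]): TimelineCue[] {
  const cues: TimelineCue[] = [];
  let lastSpeaker: string | undefined;
  for (const seg of segments) {
    if (seg.speaker !== lastSpeaker) {
      cues.push({ timeSec: seg.startSec, label: `${seg.speaker} starts talking` });
      lastSpeaker = seg.speaker;
    }
  }
  return cues;
}
```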

The two basic views in Zoom. Grid or speaker.

More advanced would be knowing when certain exchanges between multiple people form a self-contained "point". Identifying when an "exchange" takes place, or when a "question" is "answered", would be useful for title slides or lower thirds with complementary text.

Descript will identify speakers and color code the transcript.

If there are multiple shots of the same take, it would be nice for the clips to note where they begin and end by lining up the audio. Reviewing content shouldn't have to be done in a linear fashion if there are ways to analyze the content of a video/audio clip and compare it to itself or to other clips.

In line with “cues”, I would like to “search” my video in a much more comprehensive way. My iPhone photos app lets me search by faces or location. How about that in my video editor? All the video clips with a certain face or background?

Also, it would be nice to generate these "features" with some ease. I personally don't know what it would take to train a feature detector by viewing some parts of a clip, labeling it, and then using the labeled example to find other instances of similar visual content. I do know it's possible, and it would be very useful for speeding up the editing process.

In my use case, I'm seeing a lot of video recordings of Zoom calls and webinars. This is another example of video content that generally looks the "same" and could be analyzed for certain content types. I would be able to navigate clips much more quickly if I could filter video by when the screen shows many faces at once versus when only one speaker is featured.

All of this to say, there are a lot of gaps in the tools available at the moment.

Filed Under: video Tagged With: film, post production, Programming, video editing

Making the variable fonts Figma plugin (part 1 – what is variable fonts [simple])

September 8, 2020 by rememberlenny

See this video summary at the bottom of the post, or by clicking this picture.
Important update: The statement that Google Fonts only displays a single variable font axis was wrong. Google Fonts now has a variable font axis registry, which displays the number of non-weight axes that are available on their variable fonts. View the list here: https://fonts.google.com/variablefonts

Variable fonts are a new technology that allows a single font file to render a range of designs. A traditional font file normally corresponds to a single weight or font style (such as italics or small caps). If a user uses both a bold and a regular font weight, that requires two separate font files, each corresponding to one weight. Variable fonts allow a single font file to take a parameter and render various font weights. One font file can then render thin, regular, and bold based on the font-variation settings used to invoke the font. Even more, a variable font file can also render everything between those "static instances", allowing for intriguing expressiveness.

At a high level, variable fonts aren't broadly "better" than static fonts, but they allow for tradeoffs that can potentially benefit an end user. For example, depending on the font's underlying glyph designs, a single variable font file can actually be smaller in byte size than multiple static font files while offering the same visual expressibility. While the size does depend on the font glyphs' "masters", another benefit is that a single variable font requires fewer network requests to cover a wide design space.

Example of how the Figma canvas renders the "Recursive" variable font with various axis values.

Outside of the technical benefits, variable fonts provide incredible potential for design flexibility that isn't possible with static instances alone. The font weight example was given above, but a variable font can have any number of font axes, based on the designer's wishes. Another common axis is the "slant" axis, which allows a glyph to move between upright and italic. Rather than being a boolean switch, in many cases the available design space is a range, which also opens up potential for intentional font animation and transitions.

Key terminology:

Design space: the range of visual ways a font file can be rendered, based on the font designer's explicit intention. Conceptually, this can be visualized as a multidimensional space, and a glyph's visual composition is a single point in that space.

Variable axis: a single parameter that can be declared to select a position along one dimension of a font's design space. For example, the weight axis.

Variable font settings: The compilation of variable axis definitions, which are passed to a variable font and determine the selected design space. 

Static instances: An assigned set of font axis settings, often stored with a name that can be accessed from the font. For example, “regular 400” or “black 900”.

Importantly, variable fonts are supported across all major browsers. Simply load one as you would a normal font, and use the font-variation-settings CSS property to explicitly declare the variable axis parameters.

Google Fonts' variable fonts filter.

A normal font-weight or font-style declaration selects a single static style, while a variable font definition allows for a much wider range of expression, roughly as sketched below.
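Here is a rough sketch of the difference, set through the CSSOM (the stylesheet form uses the same property names). The .headline selector and the axis values are just illustrative.

```ts
// Minimal sketch: static weight vs. variable axis declarations, via the CSSOM.
const el = document.querySelector<HTMLElement>(".headline");
if (el) {
  // Static instance: pick one named weight.
  el.style.setProperty("font-weight", "700");

  // Variable font: declare arbitrary positions along the registered axes.
  // "wght" is the weight axis, "slnt" the slant axis; the values are illustrative.
  el.style.setProperty("font-variation-settings", '"wght" 650, "slnt" -10');
}
```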

Google Fonts is currently a major web font service that makes using variable fonts extremely easy. Their font directory allows filtering for variable fonts, and the font specimen pages let you sample a font's static instances as well as its weight axis. While Google Fonts serves variable fonts, they are currently limiting their API to single font weight axes.

Inter font’s weight and slant axis

One popular font, beloved by developers and designers alike, is Inter, designed by Rasmus Andersson. Inter contains a weight axis, as you can see from the Google Fonts specimen page. If you go directly to the Inter specimen website, you can see that it also contains a second font axis: the slant axis mentioned above.

From the specimen page, you can also see that adjusting the weight and slant allows for use cases that evoke different feelings of seriousness, casualness, and legibility. While changing the font weight can make text easier to read depending on its size, it can also be combined with color (for example in dark mode) to stand out more in the page's visual hierarchy.

Another font to show as an example is Stephen Nixon’s Recursive. Recursive can also be found on Google Fonts, but again by going to the font’s own specimen page, you can experiment with its full design space. Recursive contains three font axes that are unique: expression, cursive and mono. Additionally, as you can see, certain glyphs in the font will change based on the combined assigned font axis values. One example is the lowercase “a”, as well as the lowercase “g”.

Example of how the "a" and "g" glyphs change in the Recursive font.

For Recursive, some of the font axes are boolean switches, as opposed to ranges: the font is either mono or not. The range values can also be explicitly limited, such as with the cursive axis, which is either on, off, or auto.
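If I remember the axis tags correctly, driving Recursive's axes from code looks roughly like the sketch below; the tags ("MONO", "CASL", "CRSV", "slnt", "wght") and values are from memory, so verify them against the font's own documentation.

```ts
// Rough sketch: setting Recursive's axes via font-variation-settings.
// Axis tags are from memory and should be double-checked.
const codeEl = document.querySelector<HTMLElement>("code");
if (codeEl) {
  codeEl.style.fontFamily = "'Recursive', monospace";
  codeEl.style.setProperty(
    "font-variation-settings",
    // MONO is a boolean-style switch (1 = mono); CASL, wght, slnt are ranges.
    '"MONO" 1, "CASL" 0, "CRSV" 0, "wght" 400, "slnt" 0'
  );
}
```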

Side note: with Inter, one thing that was glossed over is that changes to the font's weight axis actually change the width of the glyphs. For Recursive, which has a "mono" axis, the weight is explicitly not meant to adjust glyph width. While not found in either of these two fonts, a very useful axis that sometimes appears is the "grade" axis, which allows glyphs to become thicker without expanding in width.

All of this is a quick overview, but if you are interested in learning more, do check out TypeNetwork’s variable font resource to see some interactive documentation.

Beyond the browser, major Adobe products as well as Sketch now render basic font axis sliders for customizing variable fonts. As I switch between code and design software, I was surprised to find that Figma is one of the few design tools that isn't compatible with variable fonts and their variation settings. That said, Figma does have an incredible plugin API, which lets someone hack together a temporary solution until variable fonts are fully supported.

In the next blog post, I’ll go into how Figma’s plugin architecture lets you render variable fonts as SVG vector glyphs.

Filed Under: frontend, programming Tagged With: figma, typography, variable fonts

React Figma Plugin – How to get data from the canvas to your app

September 2, 2020 by rememberlenny

I had much too hard a time grokking the Figma Plugin documentation, and thought I would leave a note for any brave souls who follow.

Figma has a great API and documentation around how to make a plugin on the desktop app. When writing a plugin, you have access to the entire Figma canvas of the active file, to which you can read/write content. You also have quite a lenient window API from which you can make external requests and do things such as download assets or OAuth into services.

All of this being said, you are best off learning about what you can do directly from Figma here.

If you are like me and working on a plugin that you have decided to write in React, you may want to receive callback events from the Figma canvas in your app. In my case, I wanted the React application to react to the user's updated selection, so that I could access the content of a TextNode and update the plugin content accordingly.

I struggled to work out from the Figma Plugin examples how to get data from the canvas into my app. The Figma Plugin examples, which can be found here, include a React application sample that sends data to the canvas, but not the other way around. While this is seemingly straightforward, I didn't immediately absorb the explanations on the Figma Plugin website.

In retrospect, the way to do this is quite simple.

First, the Figma Plugin API uses the Window postMessage API to transmit information. This is explained in the Plugin documentation with a clear diagram which you can see here:

The first thing to note from this diagram is the postMessage API, which I mentioned above. The second thing is that the postMessage API is bi-directional, and allows for data to go from the app to the canvas, and vice-versa.

Practically speaking, the React Figma Plugin demo shows this in its example code.

This is part of the React app, which is in the ui.tsx file
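A minimal version of that ui.tsx snippet looks roughly like this. The message type and count field are illustrative, while the pluginMessage wrapper and the "*" target origin follow the Figma plugin convention.

```ts
// ui.tsx (React side): send a message to the plugin code running against the canvas.
// Figma expects the payload to be wrapped in a `pluginMessage` object.
function createRectangles(count: number) {
  parent.postMessage({ pluginMessage: { type: "create-rectangles", count } }, "*");
}
```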

In the example, the postMessage API uses the window.parent object to send messages from the React app to the Figma canvas. Specifically, the plugin example has two JavaScript files, code.ts and ui.tsx, which respectively contain the code that directly manages the Figma plugin API and the UI code for the plugin itself.

While the parent object is used to send data to the canvas, you need to do something different to receive data. You can learn about the window.parent API here. In short, iframes can speak to their parent windows. Since the Figma plugin UI runs in an iframe, this is how the postMessages are exchanged.

To receive data from the Figma API, you need to set up a postMessage in the code.ts file, which has access to the figma object.

In my case, I would like to access the latest selected items from the Figma canvas whenever the user selects something new. To do that, I have the following code, which registers an event listener on the figma object and then broadcasts a postMessage containing that information.

This is happening from the code.ts file
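Roughly, the code.ts side looks like the sketch below. The message shape is my own invention, but figma.on("selectionchange", ...), figma.currentPage.selection, and figma.ui.postMessage are the relevant plugin APIs (this assumes the UI has already been opened with figma.showUI).

```ts
// code.ts (canvas side): listen for selection changes and forward them to the UI.
figma.on("selectionchange", () => {
  const selection = figma.currentPage.selection;

  // Pull the text content out of any selected TextNodes; the message shape is illustrative.
  const texts = selection
    .filter((node): node is TextNode => node.type === "TEXT")
    .map((node) => node.characters);

  figma.ui.postMessage({ type: "selection-changed", texts });
});
```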

Once the figma object broadcasts the message, the React app can receive it. To do so, you create a simple event listener for the message event.

Now, the part that was unintuitive, given the example, was that the React app listens directly on the window object to receive the data broadcast from the code.ts file. You can see an example below.

Event listener, which can live anywhere in your React app (i.e. ui.tsx)
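Here is a sketch of that listener, matching the hypothetical "selection-changed" message from the code.ts example above; Figma wraps whatever code.ts sent inside event.data.pluginMessage.

```ts
// ui.tsx (React side): listen on the window for messages forwarded from code.ts.
window.addEventListener("message", (event: MessageEvent) => {
  const message = event.data.pluginMessage;
  if (message?.type === "selection-changed") {
    // e.g. push the selected text into React state here
    console.log("Selected text nodes:", message.texts);
  }
});
```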

As you can see, the React application listens for the event with window.addEventListener, as opposed to parent.addEventListener. This is because the React application cannot set up event listeners on the parent, due to cross-origin rules. Instead, you listen on the window object, and the postMessage API delivers the data that was broadcast from the code.ts file.

To summarize: to get data from the React application to the Figma canvas, you use parent.postMessage in your React code (shown in the ui.tsx file). To get data from the Figma canvas into the React application, you broadcast a message using the figma.ui.postMessage method (shown in code.ts), which the React application can then listen for with window.addEventListener.

I hope this helps if you are looking to send data from the Figma Plugin to your React application!

Filed Under: programming Tagged With: figma
