
Remember Lenny

Writing online


programming

Text rendering stuff most people might not know

October 10, 2020 by rememberlenny

I was stuck on a problem that I wanted to write out. The problem I was trying to solve could be simplified to the following:

  1. I have a box in the browser with fixed dimensions.
  2. I have a large number of words, which vary in size, which will fill the box.
  3. If a full box is considered a “frame”, then I wanted to know how many frames it would take to use up all the words.
  4. Similarly, I needed to know which frame a word would be rendered in.

This process is simple if the nodes are all rendered on a page, because the dimensions of the words can be individually calculated. Once each word has a width/height, then it's just a matter of deciding how many words fit in each row until the row is filled, and how many rows fit before the box is filled.
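To make that concrete, here is a minimal sketch of the packing step, assuming every word's width has already been measured (more on the measurement below) and that all lines share one line height. The names WordBox and packIntoFrames are my own, not from any library.

```typescript
// A minimal sketch of the packing step described above.
interface WordBox {
  text: string;
  width: number; // measured width in px, including a trailing space
}

/** Greedily fills rows, then frames; returns the frame index for each word. */
function packIntoFrames(
  words: WordBox[],
  frameWidth: number,
  frameHeight: number,
  lineHeight: number
): { frameCount: number; frameOfWord: number[] } {
  const rowsPerFrame = Math.max(1, Math.floor(frameHeight / lineHeight));
  const frameOfWord: number[] = [];
  let frame = 0;
  let row = 0;      // row index inside the current frame
  let rowWidth = 0; // width used on the current row

  for (const word of words) {
    // Wrap to the next row once the word no longer fits on this one.
    if (rowWidth + word.width > frameWidth && rowWidth > 0) {
      row += 1;
      rowWidth = 0;
      // Wrap to the next frame once the rows in this frame are used up.
      if (row >= rowsPerFrame) {
        frame += 1;
        row = 0;
      }
    }
    rowWidth += word.width;
    frameOfWord.push(frame);
  }
  return { frameCount: words.length > 0 ? frame + 1 : 0, frameOfWord };
}
```

The greedy row-filling here is the simplest possible strategy; real text justification does more work, which is why the problem maps onto the algorithms mentioned next.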

I learned this problem is similar to the knapsack problem, bin/rectangle packing, or the computer science text-justification problem.

The hard part was deciding how to gather the words' dimensions, considering the goal is to calculate the information before the content is rendered.

Surprisingly, due to my experience with fonts, I am quite suited to solving this problem – and I thought I would jot down notes for anyone else. When searching for the solution, I noticed a number of people in StackOverflow posts saying that this was a problem that could not be solved, for a variety of correct-sounding, but wrong, reasons.

When it comes to text rendering in a browser, there are two main steps that take place, which can be emulated in JavaScript. The first is text shaping, and the second is layout.

The modern way of handling these is a pair of C++ libraries called FreeType and HarfBuzz. Combined, the two libraries read a font file, render the glyphs in a font, and then lay out the rendered glyphs. While this sounds trivial, it's important because behind the scenes a glyph is more or less a vector, which still needs to be resolved into how it will be displayed on a screen. Each glyph is also laid out depending on its usage context: it will render differently based on which characters it's next to and where in a sentence or line it is located.

https://twitter.com/rememberlenny/status/1314730744581967878?s=20

There's a lot that can be said about the points above, and I am far from an expert on them.

The key point to take away is that you can calculate the bounding box of a glyph/word/string given the font and the parameters for rendering the text.
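In the browser you don't need the full FreeType/HarfBuzz pipeline to get a usable approximation of that bounding box. Here is a minimal sketch using an off-screen canvas and the TextMetrics API, assuming the text is set in a web font; the font name below is just an example.

```typescript
// Measure a word before anything is attached to the DOM, using canvas
// metrics as a stand-in for a full shaping/rasterization pipeline.
async function measureWord(
  text: string,
  font: string // CSS font shorthand, e.g. '400 16px "Inter"'
): Promise<{ width: number; height: number }> {
  // Make sure the web font is actually loaded before measuring,
  // otherwise the fallback font's metrics are returned.
  await document.fonts.load(font, text);

  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d')!;
  ctx.font = font;

  const m = ctx.measureText(text);
  return {
    width: m.width,
    height: m.actualBoundingBoxAscent + m.actualBoundingBoxDescent,
  };
}

// Usage: measureWord('rendering', '400 16px "Inter"').then(console.log);
```

With those measurements feeding the packing sketch above, the frame count and the frame index of each word can be computed before any content is rendered.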

I have to thank Rasmus Andersson for taking time to explain this to me.

Side note

Today, I had a problem that I couldn't figure out for the life of me. It may have been the repeated nights of not sleeping, but it was also a multi-layered problem that I only intuitively understood. I just didn't have a framework for breaking it apart and understanding how to approach it. In a broad attempt to see if I could get the internet's help, I posted a tweet with a Zoom link and called for help. Surprisingly, it was quite successful, and over a two-hour period I was able to find a solution.

I’m genuinely impressed by the experience, and highly encourage others to do the same.

One more note, this is a great StackOverflow answer: https://stackoverflow.com/questions/43140096/reproduce-bounding-box-of-text-in-browsers

Filed Under: programming Tagged With: fonts, typography

Making the variable fonts Figma plugin (part 1 – what is variable fonts [simple])

September 8, 2020 by rememberlenny

See this video summary at the bottom of the post, or by clicking this picture.
Important update: The statement that Google Fonts only displays a single variable font axis was wrong. Google Fonts now has a variable font axis registry, which displays the number of non-weight axes that are available on their variable fonts. View the list here: https://fonts.google.com/variablefonts

Variable fonts are a new technology that allows a single font file to render a range of designs. A traditional font file normally corresponds to a single weight or font style (such as italics or small caps). If a user uses a bold and a regular font weight, that requires two separate font files, one for each weight. Variable fonts allow a single font file to take a parameter and render various font weights. One font file can then render thin, regular, and bold based on the font variation settings used to invoke the font. Even more, variable font files can also render everything between those various “static instances”, allowing for intriguing expressibility.

At a high level, variable fonts aren't broadly “better” than static fonts, but they allow for tradeoffs that can potentially benefit an end user. For example, based on the font's underlying glyph designs, a single variable font file can actually be smaller in byte size than multiple static font files, while offering the same visual expressibility. While the size does depend on the font glyphs' “masters”, another beneficial factor is that a single variable font requires fewer network requests to cover a wide design space.

Example of how the Figma canvas renders the “Recursive” variable font with various axis values.

Outside of the technical benefits, variable fonts provide incredible potential for design flexibility which isn't possible with static instances alone. The example of a variable font weight axis was given above, but a variable font can actually have any number of font axes, based on the designer's wishes. Another common font axis is the “slant” axis, which allows a glyph to move between upright and italic. Rather than being a boolean switch, in many cases the available design space is a range, which also opens up potential for intentional font animation/transitions.

Key terminology:

Design space: the range of visual ways in which a font file can be rendered, based on the font designer's explicit intention. Conceptually, this can be visualized as a multidimensional space, and a glyph's visual composition is a single point in that space.

Variable axis: a single parameter which can be declared to select a position along one dimension of a font's design space. For example, the weight axis.

Variable font settings: the compilation of variable axis definitions, which are passed to a variable font and determine the selected point in the design space.

Static instances: An assigned set of font axis settings, often stored with a name that can be accessed from the font. For example, “regular 400” or “black 900”.

Importantly, variable fonts are live and available across all major browsers. Simply load them in as a normal font, and use the font-variation-settings CSS property to explicitly declare the variable axis parameters.

Google Fonts’ variable fonts filter.

A normal font-weight or font-style declaration only selects from a fixed set of styles, while a variable font style definition allows for a wider range of expression, as in the sketch below.
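Here is a rough sketch of that comparison, applied from script with style.setProperty; the .headline selector and the axis values are my own examples, and the element is assumed to already use a variable font such as Inter.

```typescript
// Static versus variable declarations, set from script.
const el = document.querySelector<HTMLElement>('.headline')!;

// Static font: you can only pick one of the shipped weights.
el.style.setProperty('font-weight', '700');

// Variable font: any point in the design space, e.g. weight 652 with an
// 8-degree slant. Equivalent CSS:
//   font-variation-settings: "wght" 652, "slnt" -8;
el.style.setProperty('font-variation-settings', '"wght" 652, "slnt" -8');
```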

Google Fonts is currently a major web font service that makes using variable fonts extremely easy. Their font directory allows for filtering on variable fonts, and the font specimen pages allow you to sample a font's static instances as well as its font weight variable axis. While Google Fonts serves variable fonts, they are currently limiting their API to single font weight axes (see the update above).

The Inter font’s weight and slant axes

One popular font, beloved by developers and designers alike, is Inter, which was designed by Rasmus Andersson. Inter contains a weight axis, as you can see from the Google Fonts specimen page. If you go directly to the Inter specimen website, you can see that it also contains a second font axis – the slant axis, which was mentioned above.

From the specimen page, you can also see that assigning the weight and slant can allow for use cases that make it invoke different feelings of seriousness, casualness, and legibility. While changing the font weight can make it easier to read, based on the size of the font, it can also be combined with colors (for example in dark mode) to stand out more in the page’s visual hierarchy.

Another font to show as an example is Stephen Nixon’s Recursive. Recursive can also be found on Google Fonts, but again by going to the font’s own specimen page, you can experiment with its full design space. Recursive contains three font axes that are unique: expression, cursive and mono. Additionally, as you can see, certain glyphs in the font will change based on the combined assigned font axis values. One example is the lowercase “a”, as well as the lowercase “g”.

Example of how the Recursive font’s “a” and “g” glyphs change

For Recursive, some of the font axes are boolean switches, as opposed to ranges. The font is either mono or not. Also the range values can be explicitly limited, such as with the cursive axis which is either on/off/auto.

Side note – with Inter, one thing that was glossed over is how changes in the font's weight axis actually change the width of a glyph. For Recursive, which has a “mono” axis, the weight is explicitly not meant to adjust the width of a glyph. While not found in either of these two fonts, a very useful axis that sometimes appears is the “grade” axis, which allows glyphs to become thicker without expanding in width.

All of this is a quick overview, but if you are interested in learning more, do check out TypeNetwork’s variable font resource to see some interactive documentation.

Beyond the browser, major Adobe products as well as Sketch now render basic font axis sliders for customizing variable fonts. As I switch between code and design software, I was surprised to find that Figma was one of the few design tools that wasn't compatible with variable fonts and their variable font settings. That being said, Figma does have an incredible plugin API, which lets someone hack together a temporary solution until variable fonts are implemented fully.

In the next blog post, I’ll go into how Figma’s plugin architecture lets you render variable fonts as SVG vector glyphs.

Filed Under: frontend, programming Tagged With: figma, typography, variable fonts

React Figma Plugin – How to get data from the canvas to your app

September 2, 2020 by rememberlenny

I had much too hard of a time grokking the Figma Plugin documentation, and thought I would leave a note for any brave souls who follow.

Figma has a great API and documentation around how to make a plugin on the desktop app. When writing a plugin, you have access to the entire Figma canvas of the active file, to which you can read/write content. You also have quite a lenient window API from which you can make external requests and do things such as download assets or OAuth into services.

All of this being said, you are best off learning about what you can do directly from Figma here.

If you are like me, and working on a plugin which you have decided to write in React, then you may want to receive callback events from the Figma canvas in your app. In my case, I wanted the React application to react to the updated user selection, so that I could access the content of a TextNode and update the plugin content accordingly.

To figure this out, I struggled through the Figma Plugin examples to understand how to get data from the canvas into my app. The Figma Plugin examples, which can be found here, include a React application sample which sends data to the canvas, but not the other way around. While this is seemingly straightforward, I didn't immediately absorb the explanations from the Figma Plugin website.

In retrospect, the way to do this is quite simple.

First, the Figma Plugin API uses the Window postMessage API to transmit information. This is explained in the Plugin documentation with a clear diagram which you can see here:

The first thing to note from this diagram is the postMessage API, which I mentioned above. The second thing is that the postMessage API is bi-directional, and allows for data to go from the app to the canvas, and vice-versa.

Practically speaking, the React Figma Plugin demo shows this in its example code.

This is the part of the React app that lives in the ui.tsx file.

In the example, the postMessage API uses the window.parent object to announce from the React app to the Figma canvas. Specifically, the plugin example has two JavaScript files – code.ts and ui.tsx – which respectively contain the code that directly manages the figma plugin API and the UI code for the plugin itself.
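Since the sample code itself isn't reproduced here, this is a minimal sketch of that UI-to-canvas direction; the message shape ({ type, count }) and the handler name are my own examples.

```tsx
// ui.tsx – sending data from the plugin UI to the canvas side.
function onCreateClick(count: number) {
  // The UI runs in an iframe, so it talks to the plugin code through its
  // parent window. Figma expects the payload under a `pluginMessage` key.
  parent.postMessage(
    { pluginMessage: { type: 'create-rectangles', count } },
    '*'
  );
}
```

The payload has to sit under the pluginMessage key, which Figma unwraps on the code.ts side before handing it to figma.ui.onmessage.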

While the parent object is used to send data to the canvas, you need to do something different to receive data. You can learn about how the window.parent API works here. In short, an iframe can speak to its parent window, and since the Figma plugin UI runs in an iframe, this is how the postMessages are exchanged.

To receive data from the Figma API, you need to send a postMessage from the code.ts file, which has access to the figma object.

In my case, I wanted to access the latest selected items from the Figma canvas whenever the user selects something new. To do that, I have the following code, which creates an event listener on the figma object and then broadcasts a postMessage containing that information.

This happens in the code.ts file; a sketch of it is below.
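A minimal version of that listener might look like this; figma.on('selectionchange'), figma.currentPage.selection, and figma.ui.postMessage are real plugin API calls, while the message shape is my own.

```typescript
// code.ts – broadcast the current selection to the plugin UI.
figma.on('selectionchange', () => {
  const selection = figma.currentPage.selection;

  // Send only plain, serializable data across the postMessage boundary.
  const textNodes = selection
    .filter((node) => node.type === 'TEXT')
    .map((node) => ({
      id: node.id,
      characters: (node as TextNode).characters,
    }));

  figma.ui.postMessage({ type: 'selection-changed', textNodes });
});
```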

Once the figma object broadcasts the message, the React app can receive it. To do so from the React application, you can create a simple event listener for message events.

Now, the part that was unintuitive, given the example, was that the React app listens directly on the window object to receive the data broadcast from the code.ts file. You can see an example below.

Event listener which can live anywhere in your React app (i.e. ui.tsx):
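Here is a sketch of what that listener could look like, wrapped in a small component; the component name and message shape are mine, matching the code.ts sketch above.

```tsx
// ui.tsx – receiving data broadcast from code.ts.
import * as React from 'react';

function SelectionWatcher() {
  const [textNodes, setTextNodes] = React.useState<
    { id: string; characters: string }[]
  >([]);

  React.useEffect(() => {
    const onMessage = (event: MessageEvent) => {
      // Messages sent with figma.ui.postMessage arrive wrapped
      // as event.data.pluginMessage.
      const msg = event.data.pluginMessage;
      if (msg?.type === 'selection-changed') {
        setTextNodes(msg.textNodes); // update the plugin UI with the selection
      }
    };
    // Note: window, not parent – cross-origin rules block listeners on parent.
    window.addEventListener('message', onMessage);
    return () => window.removeEventListener('message', onMessage);
  }, []);

  return <pre>{JSON.stringify(textNodes, null, 2)}</pre>;
}
```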

As you can see, to listen for the event in the React application, window.addEventListener is used, as opposed to parent.addEventListener. This is because the React application is unable to set up event listeners on the parent, due to cross-origin rules. To work around this, you can use the window object, and the postMessage API properly passes along the data that was broadcast from the code.ts file.

To summarize: to get data from the React application to the Figma plugin, you use parent.postMessage in your React code (demonstrated in the ui.tsx file). To get data from the Figma canvas into the React application, you broadcast a message using the figma.ui.postMessage method (demonstrated in code.ts), which can then be listened for from the React application using window.addEventListener.

I hope this helps if you are looking to send data from the Figma Plugin to your React application!

Filed Under: programming Tagged With: figma

From webdev to computer vision and machine learning

October 13, 2017 by rememberlenny

For the past four years, I’ve had the notion that I was going to start a company. The underlying feeling was that I wanted to work on something that I could passionately take full responsibility for. Although this has been a constant desire, I have not taken the concrete steps to make this a reality. Instead, I have developed a career writing software and learning the contracting market. Rather than develop a business plan and try to raise money, I have built and released many side-projects. Each project has given me greater understanding about a technology or field of interest.

I have actively thrown myself into whatever tasks and opportunities I had in front of me. As a result, I’ve been able to meet numerous talented and amazing people in the media, art, tech, and social-cause oriented spaces.

I started web development around the time mobile development became important. I watched JavaScript explode from a complementary skillset alongside HTML and CSS into the primary language needed to understand a seemingly unending amalgamation of frameworks.

The interesting projects I would hear about were related to building new social networks or building off of existing ones. Mobile location technology was all the rage but not fully matured, and photo sharing services were growing in influence. I even made a photo-based iOS application myself, while making a very conscious effort to not needlessly recreate Instagram.

What's next

Most recently, I've been really excited about mapping technologies and the emergence of more real-time applications. When I noticed the popularity of “big data” open source projects like Hadoop and Spark, I didn't feel like I had anything to experiment with. I tried my hand at creating an online market-platform site. I saw the rise of monthly-box-for-X services and considered how hard it would be to create a worthwhile logistics service that could fall under a monthly-box-for-X, or a more attractive uber-for-X. I even played with what is possible with IoT devices and how I might go about producing something if I validated an idea worth manufacturing. I researched Bluetooth specifications and power delivery mechanisms, and wondered what interesting art-oriented applications I could scrape together. Overall, I never committed enough to fully see the fruits of my exploration. Yet all the processes were valuable for my own growth.

Now, it feels as if there is even more to explore, and while I don’t know how, I can clearly feel that the path forward will be greater in scale.

I’m excited about the blockchain, machine learning, and new augmented/virtual reality. Of all new emerging technologies, I’ve spent the most time trying to understand and utilize machine learning for practical purposes. While I understand the blockchain in theory, I don’t feel any deep affinity for the product. I’m not driven by the anti-establishment/pro-sovereignty ideologies that fuel the crypto culture. I also don’t find the AR or VR space as interesting as those I know who have gone “all in”. I like the idea of a physical space complemented with virtual layers, but haven’t had any “Aha!” moments around how to execute the process.

Computer Vision

Image analysis and the seemingly interesting data that can be extracted through machine learning continues to pique my interest. Further, the horizon of changes in transportation (read: self-driving cars) gives me conviction that real-time location-dependent image analysis data is going to have growing importance.

Academia

I feel there is interesting work being done in both the academic and private sectors in these areas. In the academic sector, I previously saw reputable universities doing a lot of image-oriented work that wasn't immediately interesting. The highly theoretical work around compression, color, or the like is not appealing to me. The work around medical image analysis seems like a large field, but is completely unattractive in my mind. I saw numerous research projects around the field of 2D-image-to-3D-space translation. As isolated studies, these image-to-X projects aren't interesting, but the applications of these studies in the real world seem worth exploring.

I would like to spend a dedicated amount of time fully grokking the academic landscape of image-related research. I think reviewing the top universities around the United States, as well as around the world, would be highly enlightening for myself and whatever I may do.

I've tried to determine whether returning to school for a master's program would be worthwhile. From my limited exposure, I haven't found a reason why this would be critical, but I can imagine the fixed time to focus on an isolated topic would be highly beneficial. The opportunity to surround myself with likeminded people seems worthwhile.

Private sector

I have been continually fascinated with the idea of using a camera as a multi-use sensor. The software-oriented computer vision tasks that exist seem highly underexplored. This is something I would like to spend more time mapping out for myself. The newer interesting applications, such as self-driving cars, and the improvements around object recognition are fascinating. I know there are tons of other areas of interest that make this field worth exploring. I want to learn about everything from satellites to security cameras, advertising to real estate, self-driving fleet vehicles to augmented reality cameras.

From the “look what may exist 20 years from now, and ask yourself how to apply it to today” perspective, this seems like the most exciting field to me. I would like to commit significant time to understanding the future-to-be opportunities, ecosystem, and strengths/weaknesses of this space when applied to real world problems.

Beyond the technical exposure, the philosophical implications of a society devoid of privacy are scary. Practical commercial value feels like an area that will continue to develop into the future, and with it, public understanding. Rather than waiting for the Edward Snowden big-data-equivalent moment for camera technology, as Gary Chou succinctly put it, the social understanding and regulatory boundaries need to be ironed out. That being said, cameras will inevitably be everywhere and data from them will be mined by private companies. I'd like to be on the side of determining the positive value that can be created here.

Location

Mapping technology has continued to pique my interest. I started out wanting to learn more when I was doing graffiti and wondering how to avoid getting caught. “Writers” would talk about how the police used mapping technology for pinpointing artists. The running joke was that an artist’s biggest fan was the police. They had the most photos and biggest record of all the work done by a person. I recall how cities would catch artists by mapping all the reported “tags” from a single person. Through mapping the locations, the artist’s home neighborhood could often be inferred, and the overall investigation significantly narrowed. This was 15 years ago. I have no idea what the official process of this kind of data analysis is called, but I’m sure it has improved since then.

I can imagine the value when applying the same modes of analysis to any other dataset, be it social media data or commercial behavior. I’d like to deepen my understanding around this area. I imagine there is a lot to be learned around the geospatial analysis often used in natural resource prospecting. I want to learn more about how to make use of satellites. This is a superpower that was never accessible in the past, unless you were a nation-state. Hedge funds are now using satellite imagery to analyze the supply-demand of the retail industry by analyzing parking lots and transport vehicles across the world. The parallels here to other industries seem equally endless and untapped. With the adoption of cameras everywhere on the ground, I imagine the same means of analysis over time to be highly valuable.

I want to commit more time to understanding the overall problems in the space surrounding geospatial data over time. I imagine many opportunities when this is teamed up with image analysis over time and in real-time. I’d like to further understand the private sector applications and the academic fields best-suited to implement these technologies.

Overall

I slipped into programming at a time that made it possible for me to be one step ahead of the then-current incumbents. I feel like now, again, I am in a place where my past experience gives me unique insight into what I could further explore. The fields that are of particular interest, image analysis and geospatial data, seem worth understanding and are now as approachable as they will ever be. Given that my personal life and career are financially secure, I will spend the next two years personally exploring and understanding these fields. My goal is to deepen my ability to see how these fields can be used for commercial gain. I will deepen my outlook into the forefront of academic research and private sector practices. Ideally, through this process of exploration, I can document my findings for others who also find themselves interested but unsure how to further explore these areas.


Thanks to Gary Chou for suggesting I write this and for helping synthesize the ideas. And thanks to Jihii Jolly for fixing the editing nightmare.

Filed Under: programming Tagged With: Computer Vision, Future Technology, Geospatial, Machine Learning, Towards Data Science
