Remember Lenny

Archives for 2020

Text rendering stuff most people might not know

October 10, 2020 by rememberlenny

I was stuck on a problem that I wanted to write out. The problem I was trying to solve could be simplified to the following:

  1. I have a box in the browser with fixed dimensions.
  2. I have a large number of words, which vary in size, which will fill the box.
  3. If a full box was considered a “frame”, then I wanted to know how many frames it would take to use up all the words.
  4. Similarly, I needed to know which frame a word would be rendered in.

This process is simple if the nodes are all rendered on a page, because the dimensions of the words can be individually calculated. Once each word has a width/height, it’s just a matter of deciding how many words fit in each row until the row is filled, and how many rows fit before the box is filled.

I learned this problem is similar to the knapsack problem, bin/rectangle packing, or the computer science text-justification problem.

The hard part was deciding how to gather the words’ dimensions, considering the goal is to calculate the information before the content is rendered.

Surprisingly, due to my experience with fonts, I am quite suited to solving this problem – and I thought I would jot down notes for anyone else. When searching for the solution, I noticed a number of people in StackOverflow posts saying that this was a problem that could not be solved, for a variety of correct-sounding, but wrong, reasons.

When it comes to text rendering in a browser, there are two main steps that take place, which can be emulated in JavaScript. The first is text shaping, and the second is layout.

The modern ways of handling these are C++ libraries called FreeType and HarfBuzz. Combined, the two libraries will read a font file, render the glyphs in a font, and then lay out the rendered glyphs. While this sounds trivial, it’s important because behind the scenes a glyph is more or less a vector, which needs to determine how it will be displayed on a screen. Each glyph is also laid out depending on its usage context: it will render differently based on what characters it’s next to and where in a sentence or line it is located.

https://twitter.com/rememberlenny/status/1314730744581967878?s=20

There’s a lot that can be said about the points above, which I am far from an expert on.

The key point to take away is that you can calculate the bounding box of a glyph/word/string given the font and the parameters for rendering the text.
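
As a rough illustration, here is a minimal sketch of the frame-counting approach using an offscreen canvas’s measureText. It is an approximation – it ignores the shaping subtleties HarfBuzz handles, like kerning, ligatures, and context-dependent forms – but it shows that measurement can happen before anything is rendered to the page:

```typescript
// Count which "frame" each word lands in, without rendering any DOM nodes.
// Assumes a browser environment and that the font is already loaded.
function frameOfEachWord(
  words: string[],
  boxWidth: number,
  boxHeight: number,
  font: string,       // CSS font shorthand, e.g. "16px Inter"
  lineHeight: number, // pixel height of one row
  spaceWidth = 4      // assumed gap between words
): number[] {
  // An offscreen canvas can measure text without painting it.
  const ctx = document.createElement("canvas").getContext("2d")!;
  ctx.font = font;

  const rowsPerFrame = Math.floor(boxHeight / lineHeight);
  const frames: number[] = [];
  let frame = 0, row = 0, x = 0;

  for (const word of words) {
    const width = ctx.measureText(word).width;
    if (x + width > boxWidth) {
      // Row is full: wrap to the next row, or to the next frame.
      row += 1;
      x = 0;
      if (row >= rowsPerFrame) {
        frame += 1;
        row = 0;
      }
    }
    frames.push(frame); // frames[i] = frame index of words[i]
    x += width + spaceWidth;
  }
  return frames; // total frame count = frames[frames.length - 1] + 1
}
```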

I have to thank Rasmus Andersson for taking time to explain this to me.

Side note

Today, I had a problem that I couldn’t figure out for the life of me. It may have been the repeated nights of not sleeping, but it was also a multi-layered problem that I only intuitively understood. I just didn’t have a framework for breaking it apart and understanding how to approach it. In a broad attempt to see if I could get the internet’s help, I posted a tweet with a Zoom link and called for help. Surprisingly, it was quite successful, and over a two-hour period I was able to find a solution.

I’m genuinely impressed by the experience, and highly encourage others to do the same.

One more note, this is a great StackOverflow answer: https://stackoverflow.com/questions/43140096/reproduce-bounding-box-of-text-in-browsers

Filed Under: programming Tagged With: fonts, typography

Why is video editing so horrible today?

September 15, 2020 by rememberlenny

In the last three months, I have done more video post-production than in the past 12 years. Surprisingly, in those years, nothing seems to have changed. Considering how much media is now machine-analyzable, both audio and visual, I’m surprised there aren’t more patterns that make navigating and arranging video content faster. Beyond that, I’m surprised there isn’t more tooling for programmatically composing video in a polished, complementary way to the existing manual methods of arranging.

In 1918, when the video camera was created, if you filmed something and wanted to edit it, you took your footage, cut it, and arranged it according to how you wanted it to look. Today, if you want to edit a video, you have to import the source assets into a specialty program (such as Adobe Premiere), and then manually view each item to watch/listen for the portion that you want. Once you have the sections of each imported asset, you have to manually arrange each item on a timeline. Of course a ton has changed, but the general workflow feels the same.

Real life photo of me navigating my Premiere assets folders

How did video production and editing not get its digital-first methods of creation? Computing power has skyrocketed. Access to storage is practically infinite. And our computers are networked around the world. How is it that the workflow of import, edit, and export still takes so long?

The consumerization of video editing has simplified certain elements by abstracting away seemingly important but complicated components, such as the linearity of time. TikTok seems to be the most dramatic shift in video creation, in that the workflow centers on immediate review and reshooting of video. Over the years, the iMovies of the world have moved timelines from a horizontal representation of elapsed time into general blocks of “scenes” or clips. The simplification through abstraction is important for the general consumer, but reduces the attention to detail. This creates an aesthetic of its own, which seems to be the result of the changing of tools.

Where are all the things I take for granted in developer tools, like autocomplete or class-method search, in the video equivalent? What does autocomplete look like when editing a video clip? Where are the repeatable “patterns” I can write once and reuse everywhere? Why does each item on a video canvas seem to live in isolation, with no awareness of other elements or an ability to interact with them?

My code editor searches my files and tries to “import” the methods when I start typing.

As someone who studied film and animation exclusively for multiple years, I’m generally surprised that the overall ways of producing content are largely the same as they were 10 years ago – and seemingly for the past 100.

I understand that the areas of complexity have become more niche, such as in VFX or multi-media. I have no direct experience with any complicated 3D rendering and I haven’t tried any visual editing for non-traditional video displays, so it’s a stretch to say film hasn’t changed at all. I haven’t scratched the surface of new video innovation, but all things considered, I wish some basic things were much easier.

For one, when it comes to visual layout, I would love something like the Figma “autolayout” functionality. If I have multiple items in a canvas, I’d like them to self-arrange based on some kind of box model. There should be a way to assign the equivalent of styles as “classes”, such as with CSS, and multiple text elements should be able to inherit/share padding/margin definitions. Things like flexbox and relative/absolute positioning would make visual templates significantly easier and faster to develop for fresh video content.

Currently I make visual frames in Figma, then export them, because it’s so much easier than fumbling through the 2D translations in Premiere.

I would love a “smarter” timeline that surfaces “cues” I may want to hook into for visual changes. The cues could make use of machine-analyzable features detected in the available audio and video. This is filled with hairy areas, and definitely sounds nicer than it might be in actuality. As a basic example, the timeline could look at the audio or a transcript and know when a certain speaker is talking. There are already services, such as Descript, that make seamless use of speaker detection; that should find some expression in video editing software. Even if the software itself doesn’t detect this information, the metadata from other software should be put to use. A sketch of this idea follows below.
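
As a sketch of what hooking into those cues might look like, here is a minimal example. The transcript shape is a made-up assumption, loosely modeled on what a transcription tool like Descript could export:

```typescript
// Hypothetical transcript segment: a speaker-labeled time range.
interface TranscriptSegment {
  speaker: string;
  start: number; // seconds
  end: number;   // seconds
}

// Emit a timeline cue wherever the active speaker changes, so an
// editor could jump between speaker turns instead of scrubbing.
function speakerChangeCues(segments: TranscriptSegment[]): number[] {
  const cues: number[] = [];
  for (let i = 1; i < segments.length; i++) {
    if (segments[i].speaker !== segments[i - 1].speaker) {
      cues.push(segments[i].start);
    }
  }
  return cues;
}
```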

The two basic views in Zoom. Grid or speaker.

More advanced would be knowing when certain exchanges between multiple people form a self-contained “point”. Identifying when an “exchange” takes place, or when a “question” is “answered”, would be useful for title slides or lower-thirds with complementary text.

Descript will identify speakers and color code the transcript.

If there are multiple shots of the same take, it would be nice to have the clips note where they begin and end by lining up the audio. Reviewing content shouldn’t have to be done in a linear fashion if there are ways to distinguish the content of a video/audio clip and compare it to itself or to other clips.

In line with “cues”, I would like to “search” my video in a much more comprehensive way. My iPhone photos app lets me search by faces or location. How about that in my video editor? All the video clips with a certain face or background?

Also, it would be nice to generate these “features” with some ease. I personally don’t know what it would take to train a feature detector by viewing some parts of a clip, labeling it, and then using the labeled example to find other instances of similar visual content. I do know it’s possible, and it would be very useful for speeding up the editing process.

In my use case, I’m seeing a lot of video recordings of Zoom calls and webinars. This is another example of video content that generally looks the “same” and could be analyzed for certain content types. I could navigate through clips much faster if I could filter video by when the screen shows many faces at once versus when only one speaker is featured.

All of this to say, there are a lot of gaps in the tools available at the moment.

Filed Under: video Tagged With: film, post production, Programming, video editing

Making the variable fonts Figma plugin (part 1 – what are variable fonts [simple])

September 8, 2020 by rememberlenny

See the video summary at the bottom of the post.
Important update: The statement that Google Fonts only displays a single variable font axis was wrong. Google Fonts now has a variable font axis registry, which displays the number of non-weight axes that are available on their variable fonts. View the list here: https://fonts.google.com/variablefonts

Variable fonts are a new technology that allows a single font file to render a range of designs. A traditional font file normally corresponds to a single weight or font style (such as italics or small caps). If a user uses a bold and a regular font weight, that requires two separate font files, one for each weight. Variable fonts allow a single font file to take a parameter and render various font weights. One font file can then render thin, regular, and bold based on the font variation settings used to invoke the font. Even more, variable font files can also render everything between those “static instances”, allowing for intriguing expressibility.

At a high level, variable fonts aren’t broadly “better” than static fonts, but allow for tradeoffs that can potentially benefit an end user. For example, based on the font’s underlying glyph designs, a single variable font file can actually be smaller in byte size than multiple static font files, while offering the same visual expressibility. While the size does depend on the font glyphs’ “masters”, another benefit is that a single variable font requires fewer network requests to cover a wide design space.

Example of how the Figma canvas renders the “Recursive” variable font with various axis values.

Outside of technical benefits, variable fonts provide incredible potential for design flexibility which isn’t possible with static instances alone. The font weight example was given above, but a variable font can have any number of font axes, based on the designer’s wishes. Another common axis is the “slant” axis, which allows a glyph to go between upright and italic. Rather than being a boolean switch, in many cases the available design space is a range, which creates potential for intentional font animation/transitions as well.

Key terminology:

Design space: the range of visual ways in which a font file can be rendered, based on the font designer’s explicit intention. Conceptually, this can be visualized as a multidimensional space, and a glyph’s visual composition is a single point in that space.

Variable axis: a single parameter which can be declared to determine a font’s design space. For example, the weight axis.

Variable font settings: the combination of variable axis definitions, which are passed to a variable font and determine the selected point in the design space.

Static instances: An assigned set of font axis settings, often stored with a name that can be accessed from the font. For example, “regular 400” or “black 900”.

Importantly, variable fonts are live and available across all major browsers. Simply load one in as a normal font, and use the font-variation-settings CSS property to explicitly declare the variable axis parameters.

Google Fonts’ variable fonts filter.

A normal font weight declaration or a font style declaration would look like the example below, while a variable font style definition allows for a wider range of expression.
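
For illustration, a rough sketch – the selectors and axis values here are arbitrary examples:

```css
/* Static fonts: pick from fixed weights/styles, one file each */
h1 {
  font-weight: 700;
  font-style: italic;
}

/* Variable font: any point in the design space, from a single file */
h1 {
  font-variation-settings: "wght" 650, "slnt" -10;
}
```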

Google Fonts is currently a major web font service that makes using variable fonts extremely easy. Their font directory allows filtering for variable fonts, and the font specimen pages let you sample a font’s static instances as well as its weight axis. While Google Fonts serves variable fonts, they currently limit their API to the single font weight axis.

Inter font’s weight and slant axes

One popular font, beloved by developers and designers alike, is Inter, designed by Rasmus Andersson. Inter contains a weight axis, as you can see from the Google Fonts specimen page. If you go directly to the Inter specimen website, you can see that it also contains a second font axis – the slant axis, mentioned above.

From the specimen page, you can also see that assigning the weight and slant allows for use cases that invoke different feelings of seriousness, casualness, and legibility. While changing the font weight can make text easier to read depending on its size, it can also be combined with colors (for example in dark mode) to stand out more in the page’s visual hierarchy.

Another font to show as an example is Stephen Nixon’s Recursive. Recursive can also be found on Google Fonts, but again, by going to the font’s own specimen page you can experiment with its full design space. Recursive contains three font axes that are unique: expression, cursive, and mono. Additionally, certain glyphs in the font will change based on the combined assigned font axis values. Examples are the lowercase “a” and the lowercase “g”.

Example of how the “a” and “g” glyphs change in Recursive.

For Recursive, some of the font axes are boolean switches, as opposed to ranges: the font is either mono or not. Range values can also be explicitly limited, such as the cursive axis, which is either on/off/auto.

Side note – with Inter, one thing that was glossed over is how changes in the font’s weight axis actually change the width of a glyph. For Recursive, which has a “mono” axis, the weight is explicitly not meant to adjust a glyph’s width. While not found in either of these two fonts, a very useful axis that sometimes appears is the “grade” axis, which allows glyphs to become thicker without expanding in width.

All of this is a quick overview, but if you are interested in learning more, do check out TypeNetwork’s variable font resource to see some interactive documentation.

Beyond the browser, major Adobe products as well as Sketch now render basic font axis sliders to customize variable fonts. Switching between code and design software, I was surprised to find that Figma was one of the few design tools not compatible with variable fonts and their variation settings. That being said, Figma does have an incredible plugin API, which lets someone hack together a temporary solution until variable fonts are implemented fully.

In the next blog post, I’ll go into how Figma’s plugin architecture lets you render variable fonts as SVG vector glyphs.

Filed Under: frontend, programming Tagged With: figma, typography, variable fonts

React Figma Plugin – How to get data from the canvas to your app

September 2, 2020 by rememberlenny

I had much too hard of a time grokking the Figma Plugin documentation, and thought I would leave a note for any brave souls who follow.

Figma has a great API and documentation around how to make a plugin for the desktop app. When writing a plugin, you have access to the entire Figma canvas of the active file, to which you can read/write content. You also have quite a lenient window API, from which you can make external requests and do things such as download assets or OAuth into services.

All of this being said, you are best off learning about what you can do directly from Figma here.

If you are like me, working on a plugin which you have decided to write in React, then you may want to receive callback events from the Figma canvas in your app. In my case, I wanted the React application to react to the updated user selection, so that I could access the content of a TextNode and update the plugin content accordingly.

To do this, I struggled with the Figma Plugin examples to understand how to get data from the canvas into my app. The Figma Plugin examples, which can be found here, include a React application sample which sends data to the canvas, but not the other way around. While this is seemingly straightforward, I didn’t immediately absorb the explanations from the Figma Plugin website.

In retrospect, the way to do this is quite simple.

First, the Figma Plugin API uses the Window postMessage API to transmit information. This is explained in the Plugin documentation with a clear diagram which you can see here:

The first thing to note from this diagram is the postMessage API, which I mentioned above. The second thing is that the postMessage API is bi-directional, and allows for data to go from the app to the canvas, and vice-versa.

Practically speaking, the React Figma Plugin demo shows this in the example below.

This is part of the React app, which is in the ui.tsx file
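
Reconstructed roughly, it looks like the following – the message fields follow Figma’s sample plugin, and the payload must be wrapped in a pluginMessage key:

```typescript
// ui.tsx – the UI runs in an iframe, so it reaches the plugin code
// through its parent window.
parent.postMessage(
  { pluginMessage: { type: "create-rectangles", count: 5 } },
  "*"
);
```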

In the example, the postMessage API uses the window.parent object to announce from the React app to the Figma canvas. Specifically, the plugin example has two JavaScript files – code.ts and ui.tsx – which respectively handle the code that directly manages the Figma plugin API, and the UI code for the plugin itself.

While the parent object is used to send data to the canvas, you need to do something different to receive data. You can learn about how the window.parent API works here. In short, iframes can speak to their parent window. As the Figma plugin UI runs in an iframe, this is how the postMessages are exchanged.

To receive data from the Figma API, you need to set up a postMessage from the code.ts file, which has access to the figma object.

In my case, I would like to access the latest selected items from the Figma canvas whenever the user selects something new. To do that, I have the following code, which creates an event listener on the figma object and then broadcasts a postMessage containing that information.

This is happening from the code.ts file
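
A minimal sketch of that listener – the message shape ({ type, textContent }) is my own choice:

```typescript
// code.ts – runs with access to the figma object.
figma.showUI(__html__);

// Whenever the selection changes, push the text content of any
// selected TextNodes over to the plugin UI.
figma.on("selectionchange", () => {
  const textContent = figma.currentPage.selection
    .filter((node): node is TextNode => node.type === "TEXT")
    .map((node) => node.characters);
  figma.ui.postMessage({ type: "selection", textContent });
});
```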

Once the figma object broadcasts the message, the React app can receive it. To do so, you create a simple event listener for the message event.

Now, the part that was unintuitive given the example: the React app listens directly on the window object to receive the data broadcast from the code.ts file. You can see an example below.

Event listener which can live anywhere in your React app (ie. ui.tsx)
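
A minimal sketch of that listener, matching the message shape broadcast above:

```typescript
// ui.tsx – listen on window (not parent) for messages from code.ts.
// Figma delivers the payload under event.data.pluginMessage.
window.addEventListener("message", (event) => {
  const msg = event.data.pluginMessage;
  if (msg && msg.type === "selection") {
    // e.g. push msg.textContent into React state here
    console.log("Selected text nodes:", msg.textContent);
  }
});
```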

As you can see, to listen for the event in the React application, window.addEventListener is used, as opposed to parent.addEventListener. This is because the React application is unable to set up event listeners on the parent due to cross-origin rules. Instead, you listen on the window object, and the postMessage API delivers the data that was broadcast from the code.ts file.

To summarize: to get data from the React application to the Figma canvas, you use parent.postMessage in your React code (demoed in the ui.tsx file). To get data from the Figma canvas into the React application, you broadcast a message using the figma.ui.postMessage method (demoed in code.ts), which can then be listened for in the React application using window.addEventListener.

I hope this helps if you are looking to send data from the Figma Plugin to your React application!

Filed Under: programming Tagged With: figma

Turning 31, and reflecting on the past 10 years

August 22, 2020 by rememberlenny

2020 Soka University of America

Tomorrow is my birthday, and I turn 31 years old. I didn’t think much about what turning 30 meant last year, so I’m taking a pause to write down some thoughts about the last 10 years, and the next 10 – my happiness, goals for the future, current life circumstances, and future concerns. There is a global pandemic going on, but outside of this sentence, I won’t mention it again.

Ten years ago, in 2010, I lived in Aliso Viejo, had a BlackBerry and an HP laptop, and drove a stick-shift Saturn. I had been back in college for one entire year, after dropping out and moving to Los Angeles, where I lived in a one-bedroom apartment with ten other tenants while figuring out what I was trying to do with my life. Prior, I worked a graveyard shift at CVS (10pm to 6am), and found time to paint graffiti on rooftops and empty streets. By the fall semester of 2010, most of my classmates were on study abroad, and I was in the midst of a clinical trial to pay off some credit card debt I had accumulated. My best friend was in rehab, and I began taking my Buddhist practice seriously to ground myself and master my own tendencies. I began painting canvases as a means of blowing off steam, and also began exploring how to program. I worked in a concert hall as a part-time gig, and pursued contract web development opportunities on craigslist.

Ten years ago, I wasn’t on Twitter, I didn’t know who Paul Graham or Peter Thiel were, I didn’t know anyone who worked at or ran a startup, and I wasn’t on a pursuit of wealth or riches. My parents were finally settling into a regular life after our family went through a drawn-out bankruptcy and legal dispute with a nation state. A reasonable trajectory for me was either grad school or a well-paying job. I recall seriously thinking I would attend law school, reading One L, and even starting a club at school to prepare for the LSAT.

By 2010, I had developed an interest in the digital humanities and online culture, specifically from the lens of communication theory. I researched everything I could about Reddit, and believed that the intellectual and social capital found in subreddits was a completely untapped resource. The Arab Spring was erupting, and I recognized that the live updates from direct experts in subreddits were far more valuable than anything the news was publishing. The on-the-ground photos and clear community-run explainers felt like the future.


My 2010 paint stash

As a student, I was studying Chinese – poorly – and doubled down on an academic interest in propaganda, marketing, and what “dialogue” looked like online. I thought the Chinese influence in Africa was the most underappreciated power play of the century. I was also pursuing a personal interest in local politics by attending city council meetings, and considering a possible future in education, which I vetted by volunteering at a nearby alternative school.

In a nutshell, 2010 was the decline of my academic focus, and my shift toward a pragmatic commercial future. My interest in getting the best grades had peaked, and I shifted to wanting to learn the most without wasting my time. I was considering moving to China after college, where I thought I could get experience in wholesale trading and factory production, and I also considered the value of knowing Mandarin for whatever global political shifts were to come.

Outside of the Arab Spring, I wasn’t mentally or emotionally invested in the US occupation of the Middle East, and had been largely apolitical outside of the excitement around Barack Obama’s presidency.

In 2011, I was elected attorney general of the school’s student government, and did my best to contribute to the university’s mission. I can’t point to any long-lasting change that was made, but that period gave me a close look into the operations of a Robert’s Rules-oriented body. Looking back, I was hyper-focused on the “creation of value” and maximizing my time spent.

Since I had dropped out of college for a semester in 2008, all of my entering classmates had graduated by the spring of 2011, and I was absorbed into the student body of the class of 2012. At this point, I also moved out of the college dorms, where 95% of the student body lived, and commuted by bike from my parents’ home to attend classes.

Over the next two years, the major events in my life were my older brother’s death, and moving to China. 

I arrived in China in the winter of 2012. I began my travels with a month-long trip with a classmate in the western Yunnan province. We spent the month going between tropical climates in the south and the coldest areas in the north bordering Tibet. The month of traveling was what I thought would be a good transition into my final semester of school.

From Yunnan, I took a two-day overnight train to Shanghai, crossing from the westernmost part of China to the easternmost. I was ready to start school, and also kept my attention on the possibility of finding work. I also connected with local street artists, who had a graffiti studio. This group of friends ended up becoming a major influence on my following decisions.

“Gift” mural on a roof top in Shanghai

While I started language classes, I quickly realized I did not feel that the effort needed to succeed would necessarily result in learning the language. Instead, I felt the classes would be a waste of time, and given the recent events in my life, there were more important things I could focus on. I ended up finding a part-time job as a bartender, actively engaged in the startup tech community, and began actively soliciting work as a web developer.

To my own surprise, I was able to find quite a bit of work very quickly, and realized that school was not necessary for me to take the next steps in my life. Not completing the one semester I had left for a college diploma was not the greatest of choices, but at the time, I figured it would be possible to finish later.

At my peak in China, I had grown quite comfortable working multiple jobs and furthering my programming ability. I had a very fortunate series of events, during which, on the one weekend I took off for a school trip, my bartending workplace was raided by the police for employing international workers. This reinforced my doubling down on programming-related income streams, of which by that time I had plenty. I grew comfortable finding new web development clients by going to foreign businesses that I thought might need programming help, or by talking to foreigners at coffee shops and introducing myself. This was surprisingly effective.

In 2013, at approximately the one-year mark in China, I decided it would be valuable to finish my college diploma. I wanted to prove to myself that I could move on to the next phase of life, confident that I finished the things I started. This was quite a shift, going from working every day back to being a student.

Through a series of factors, I decided New York was likely the most similar place to Shanghai, given my intention to find a job and avoid moving back home with my parents in California. At the time, a girl I liked from college lived in New York, and I thought if we were in the same city, it was possible for us to get together. While initially that wasn’t the case, we are now married.

In New York, I enrolled in weekend Chinese language classes and took a full-time contract web development job. I hustled for the next few months and finished my college diploma, which was a major accomplishment given the options I had available. From there, I began my first full-time salaried employment in the media industry, as a senior-level software engineer. Up until then, I had never actually worked at a company, but my contracting jobs had advanced my knowledge in areas that companies happened to need. I was always under the impression that I didn’t know enough, but given the constantly changing nature of software, what I had learned was what mattered most at the time.

My NYC apartment home studio

From 2014 on, the following years were largely based around my work and side projects. Most of my time outside of work was heavily engaged in my Buddhist community, where I would attend meetings, visit with friends, and help organize numerous weekly activities. Also being relatively new to New York, I attended meetups in the tech industry, so that I could learn what I didn’t know. I volunteered at conferences, co-organized hackathons, and made a weekly routine of reaching out to people online to meet in person.

At one point, the team I was on at work was awarded recognition for performance, which made a major impact on my sense of accomplishment. On one level, I was very aware that we just did our job, and happened to be working on the right thing at that time. Our team had launched the New Yorker paywall, and in the process, pulled it off without a hitch. A precious memory was the official launch, for which some of us slept over in the office to see the job through.

Throughout this time, I became more and more aware of the potential of starting a company. I still hadn’t taken any concrete steps, but absorbed the startup narrative around the difficulty of hiring. I decided to maximize the number of people I could meet, in case some future event would allow me to hire my friends. I got involved in many incredible communities of designers, programmers, and entrepreneurs.

Notably, in 2014 I started a routine of traveling to a new country every few months. I went to Ecuador and Peru for two weeks for a road trip with friends. I also went to Berlin for a tech conference, where I made some great friends. In the following years, I also made a short trip to India, where I was able to get away and focus on a personal project.

Hacking Journalism selfie

By 2015, my effort to “network” was in full swing. I wasn’t intentionally going to networking events, but for the meetups I did attend, I always tried to make one friend who I would then plan to get breakfast with at a later time. I had hit my stride at work, and while contributing to the advancement of my team, I didn’t want to limit myself to a job, and saw my engagement in side projects as a crucial factor in my learning. I was highly aware that the work I was doing at that point was only possible from the many side projects and contract gigs that I had done before, so my pace of out-of-work work was critical.

Around this time, I attended various classes and enrolled in extracurricular learning opportunities to propel my technical knowledge. I was aware that while I could be employed as a senior level software engineer, my colleagues had spent years studying computer science and had a technical foundation that I was unfamiliar with. In the big picture, this was not as important as long as I was proving my execution ability, but the awareness of this lacking knowledge had continued to motivate me.

Although I didn’t need to at the time, I found opportunities to contract with major companies I never imagined working with, and also continued my in-person public soliciting of web development.

In the midst of working, a major shift happened: I started a serious relationship with my now wife. Prior to this point, in much of my personal life I was barely keeping my head above water. Planning, communicating, and coordinating with others was not my forte; acting on impulse and corralling others was my strength. Through a series of major mess-ups as a boyfriend, such as double-booking a celebratory birthday trip and bailing on major holiday celebrations, I started maturing into a person who could truly consider and plan around the needs of others. This is something that, until being in a relationship, I was able to get by without.

By 2016, I was in a new job, working for the federal government of all places. I didn’t foresee that, given my juvenile troublemaking. The ability to work for a cause-oriented organization was a big shift I had been yearning for. Given my workload, I also made a mental shift away from continuing so much contract work, wanting instead to double down on projects that aligned with some meaningful future state. I was tired of the transactional one-off clients, and wanted to see the work I did outlast me.

A number of my side projects from 2014 were still active, and I occasionally considered trying to translate the ideas into businesses. Two in particular were a street art tracking project and a service for publishers to engage readers who didn’t finish reading long content. In the midst, I also seriously pursued a project to help people working on side projects gather an audience before they are ready to launch. Interestingly, now in 2020, this is a surprisingly common company theme, at the cross-section of Twitter meets OnlyFans. Another major project was an attempt to codify my practice of meeting strangers for breakfast, as a service to meet other professionals. Based on my exposure to the professional world at the time, I only saw the value of expanding around side projects, but now realize the larger potential of the idea.

Teaching a community center lesson on how to spray paint in Ecuador

My shift away from short-term pay and toward longer-term self-sustaining projects eventually resulted in my exploration of harder technical fields. This aligned with the popularization of machine learning, specifically the advancements around image recognition, which made computer vision applications approachable. Given my prior interest in tracking street art, this drove me to look at working at a company in this space. With two years at the federal government behind me, a startup seemed appealing.

Worth noting: while in China, I worked at a startup that was in a somewhat unique situation. As a company that had raised venture capital, they spent too much money, too early, without product-market fit. When they finally got to a point where growth would be important for capturing market share, they couldn’t raise another round of funding. One cherished memory at this Chinese startup was our pure scrappiness as a company. The office we worked out of was in such a state that although we had desks, the rest of the floor plan was under construction. This resulted in exposed live wires and the need to wear particle filtration masks while working, due to the construction materials in the air. Good times.

Coming back to now – the major events in my life recently were around getting married and the shifts leading up to it. As I saw getting married as a major life event, I reconsidered my role at the startup I was at, based on my financial optionality. I gauged how much the stock I owned could be worth, and realized that my time working at a larger tech company would likely be more valuable and less risky. While not initially having anything lined up, I contracted as a UX engineer at Google, which topped any other workplace I had been at before. My specific responsibilities were on a team of contractors, but the role gave me insight into the big tech company ecosystem.

Outside of work, a major part of 2016 to 2018 was a youth festival organized by my Buddhist organization. We gathered 50,000 people at 9 different venues around the country, the largest of which was in New York, where I was most invested. The entire festival and its preparation required incredible effort, which I felt deeply appreciative to have been able to make.

While the preparation for the festival put traveling and side projects on hold, I was able to make steady progress on scraping street art off the internet. This fortunately got some attention, which eventually allowed me to participate in some events that were hugely formative for my interest in building a company. For one, I was able to meet face to face with many established founders of multi-billion-dollar enterprises, and put a secular, quantifiable form to my own interest in positive societal change.

Japan

I was also able to go to Japan and Korea for work and a friend’s wedding. I feel there is much of the world I am still yearning to see. Unmentioned earlier, I was able to take a very refreshing trip with my wife to Europe, during which I visited Paris, Budapest, and briefly Lisbon.

By now, I am capturing a very wide period of my life, and largely focusing on elements surrounding work, career, and personal relationships. There is much more to unpack, and countless meaningful experiences and personal relationships I never touched on, but at a high level, this provides some perspective on how I think about the last ten years.

As the most recent notable change, I ended my contract with Google, started a full-time job, and two months later left that job. I finally realized that I want to commit seriously to building a company, and believe there is never a right time, so I will do it now. In the last 10 years, I haven’t exclusively worked on one thing in isolation, and this is a major personal shift that was a long time coming. As I determine to grow my current project into something I can be proud of, I expect a lot more personal development to take place.

I started out writing this thinking I would consider what my next 10 years will look like, in addition to reflecting on the last 10. At this point, I know for certain that my next ten years will involve starting a family, and being more invested in the outcomes of my extended family, on both my wife’s side and mine.

I do want to formulate some clear themes around how I imagine my next ten years to resonate, but for now this is what I have.

Filed Under: Personal, year in review Tagged With: 10 year reflection, aging, Birthday

Which expensive events permanently go digital?

August 21, 2020 by rememberlenny

One of the more interesting conversations I had recently was with a B2B sales director, who could clearly articulate the before-and-after changes from Covid. In short, companies allocate non-trivial budgets to send reps to in-person events, trade shows, and conferences, knowing there are unattributable effects from these costs. The side-channel interactions between conference talks, and the face time with in-person attendees, are crucial for most sales pipelines.

Using a framework I was introduced to by Daniel, from Pioneer, these events and their impact fall onto a two-axis map of attributable results and costs. Results either are or are not attributable to a cause, and actions either are or are not costly to take. Between these two axes, you can have sales channels that are expensive and attributable, expensive and not attributable, inexpensive and attributable, and inexpensive and not attributable.

GPT-3: Make me a Stratechery-styled chart for attributable results and cost.

As noted from the sales side, most efforts to generate leads are costly and not attributable to specific actions. In other words, sending a sales rep to speak at a conference or signing up for a booth at a trade show are both costly and don’t generate leads or sales at a normal cost per acquisition. The intangible benefits, such as exposure and branding, are the justification for spending the money.

Outside of the individual sales rep perspective, a company’s yearly multimillion-dollar event may be held at a huge cost and have relatively little attributable sales impact. While an event can give a slight sales bump compared to no event at all, that in no way justifies the huge cost of organizing it. Think Salesforce’s Dreamforce or Google’s SPAN.

A few other examples of expensive, non-attributable actions are in the recruiting space. Engineering teams may sponsor large events or send employees to conferences for recruiting purposes, but don’t actually return with concrete recruiting leads. Again, there are often intangible benefits, such as employee satisfaction, but the point stands.

Due to the major Covid shift of in-person events going digital, companies are paying closer attention to costs and attributable results. Large event costs will be harder to justify in the future when digital equivalents have attributable outcomes. If a $4 million event gave a 30% sales boost, but a $300,000 digital event can create a 20% boost, then the in-person event’s outstanding costs won’t be coming back immediately. Is the remaining 10% worth $3,700,000? No.

As many more events move online, a far greater number of previously in-person events will likely stay online. Considering that the ratio of online event invites to registrations and actual attendees continues to shrink, the need to refine the surface area of online events is becoming more important. A similar email invite and Zoom link isn’t sufficient. The event speakers, email reminders, in-event promotion, post-event follow-up, and summary resources are more important than ever.

One great write-up Ross shared with me was on webinar industry trends and the reception of the “Cambrian explosion” of digital events. (I had to do it.)

You can find that here: https://www.trustradius.com/vendor-blog/the-impact-of-covid-19-on-digital-events

Hmm…which one should I attend?

Companies holding online events are now competing with the newest HBO hit series release, but have much less to offer. Considering the competition, tools that help companies promote, run, and engage audiences for online events are more important than ever. Over the last few years, companies went from refined writing techniques to clearly defined visual brand guidelines. Now that well-laid-out photos and visual styles are not enough, companies are hiring in-house video producers to manage livestreams, tutorial content, and editing of recorded events.

Video tools used to be generic timeline editors, like Final Cut or Adobe Premiere, but consumer tools such as TikTok, and livestream tools for the likes of Twitch, are revealing the potential for improvement. How many webinars use OBS to engage their audiences? As new demands are set for quality video content, the tooling will continue to evolve and become more niche.

As the events that were previously in-person move online, I expect a lot more companies to appear in this space.

Filed Under: projects, video
