Remember Lenny

Writing online

video

Second order effects of companies as content creators

February 13, 2021 by rememberlenny

TLDR: There’s going to be a shit ton of temporarily useful content; businesses need to have a quick way to make the worthwhile content stand out. (Thanks Joseph)

There are two ideas I have been thinking about over the past few months that I want to document here. The first has to do with disruption, and the second has to do with novelty vs usefulness.

The late Clay Christensen pointed out that major technology shifts take place when there is a massive change in the price or available quantity of something. When there was a 10x reduction in the price of computers, the personal computing revolution took place. When prices decrease, the potential applications for a technology increase significantly. When prices decrease and potential applications increase, the quantity of the technology in the world also increases, and with it come new platforms for new tools and previously non-existent technologies.

I’ve heard this idea too many times to count, but something I never thought deeply about was why certain ideas within a new platform do or don’t succeed. The specific area of interest for me is how a new platform creates opportunities that previously couldn’t exist – and therefore everything that wasn’t possible before is novel. When I use the term novel, I mean that because something is new, its significance seems highly valued relative to the past. That said, as the new idea is projected out into the new reality, the novelty eventually wears off and it is no longer as valuable as it once was.

Another way to think of it is the bundling and unbundling effect, which I believe comes from an American economist who studied the cross-country trucking industry and the standardization of freight trucks. Sadly, I don’t know the original reference. The summary point is that when getting packages safely and cheaply across the country was difficult, the creation of the trailer-hauling semi-truck (I’m pretty sure the first version was not an 18-wheeler, so I’m butchering the facts) was an innovation that solved a large problem. Once the semi-truck was widely accepted as the best option for hauling freight, its machine parts were modularized so that trucking companies could delegate part production and reduce costs. At the same time, modularizing the trucks allowed new entrants into the trucking space, which created differentiated offerings and options for cross-country freight hauling.

World War I trucks circa 1917, manufactured by White Motor Company

Once hauling freight was predictable, new industries that couldn’t exist before emerged. The trucking industry created new modes of consumption on top of federated infrastructure. Big box stores like Walmart or Costco (or their early 20th century parallels), which couldn’t exist before, were now possible. As the new modes of freight shipping made a new economic model viable, parking-lot-based stores formed and the urban landscape changed. I’m extrapolating here, but I imagine a number of suburban environments formed due to this single freight-based innovation.

The most exciting part of this to me is how the previously nascent technologies around freight positively exploded as a result of the new advancements.

In turn, the new urban landscape fostered smaller environments that created new markets of supply and demand that didn’t previously exist. I imagine this also put stress on areas that never expected such high throughput. I’m sure new roads needed to be developed, and new laws, and in the process more and more markets emerged.

Intermodal containers waiting to be transferred between ships, trains, and trucks are stacked in holding areas at a shipping port.

I mention all of this because I find an interesting parallel in what I’ve been thinking about lately: the proliferation of video content. In relation to traditional disruption theory, we had a major event that increased the quantity of video production without necessarily changing the cost of production. The low cost of individual video creation had already transformed the way we communicate personally – as seen through social media – but the impact on businesses was delayed. Although the cost of video production had been low for quite a while, it took the societal forces around Covid to turn nearly every business into a video producer.

I think a good parallel is to see Zoom as the semi-truck and the need to interact with one another during a pandemic as the freight to haul. Naturally, as Zoom became the fix-all solution, the tool that started as a necessity became fraught with problems for specific use cases. For one, Zoom wasn’t designed for highly interactive environments or planning events ahead of time. As a result, a whirlwind of new applications emerged to complement the lacking areas, such as Slido for questions or Luma for event planning.

Zoom’s usage change over one year

The need for higher fidelity interactions grew as interacting over video became the new normal and was no longer a novelty. For one, people wanted to have fun. Games became a common way to spend time together, and beyond actual video games, an entire category of Zoom-based games emerged. Things like GatherTown or Pluto Video came to light. Similarly, as the interaction model of being online together and participating in a shared online experience was normalized, existing platforms like Figma were used for non-traditional purposes to create synchronous shared experiences.

I bring these examples up because I think they are obviously novel ideas that won’t be valuable for extended periods of time. That being said, the seed of the novelty comes from a core experience that will likely reemerge in various places in the future.

Continuing with the Zoom thread, the explosion of video communication, and of alternative ways of communicating with video, has created an explosion of video content that previously didn’t exist. Of course, there is novelty around having this increased content – which I believe will only continue to increase. As with many things, the transition from in-person to digital was a one-way door for many industries, where the reduction in cost and surprising resilience in results has created a new normal.

I won’t attempt an exhaustive analysis of video, but one area that is particularly interesting is the machine learning space. Over the past decade, the huge improvements in computer vision, deep learning, and speech-to-text research have been expensive to achieve, and the applications have been very specific.

There is an exciting cross section between the sheer quantity of content being produced and the widely accessible machine learning APIs that make it possible to analyze that content cheaply, in a way that wasn’t previously possible. The novelty effect here seems ripe for misapplication. Specifically, the thing that wasn’t generally possible and the thing that previously wasn’t widely present are converging at the same time.

Real life example of an executive realizing how much money they are spending on novel tools they don’t actually use.

More specifically, there is a ton of video content being produced by individuals and businesses. Naturally, given the circumstances, applying the new technology to the ever-growing problem seems like a good idea. Creating tooling that helps organize the new video content, or improves reflection on and recall of that content, seems valuable. But is it just a novelty or a future necessity? If it’s a necessity, will it become a commodity and common practice, or specialized enough to demand a variety of offerings?

Applying speech-to-text processing to video content is cheap, so analyzing everything creates a new possibility: previously non-machine-readable media is now itself a resource that didn’t exist before. Not only do we have video that didn’t exist before, but we have a whole category of text content that was non-existent. At the basic level, we can search video. At the more advanced level, we can quantify qualities about the video at scale. In the past, the valuable applications for this were around phone call analysis, but now they are applicable to the video calls of sales teams or the user interviews of product teams.
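
The searchability point can be sketched in a few lines. Assuming a speech-to-text step has already produced timestamped transcript segments (the segment format here is made up for illustration), searching video reduces to searching text and returning timecodes:

```python
def search_transcript(segments, query):
    """Return (start, end) timecodes of transcript segments mentioning `query`."""
    q = query.lower()
    return [(s["start"], s["end"]) for s in segments if q in s["text"].lower()]

# Hypothetical output of a speech-to-text pass over a recorded sales call.
segments = [
    {"start": 0.0, "end": 4.2, "text": "Thanks for joining the sales call"},
    {"start": 4.2, "end": 9.8, "text": "Let's review the pricing page feedback"},
    {"start": 9.8, "end": 15.0, "text": "Users were confused by the checkout flow"},
]

print(search_transcript(segments, "pricing"))  # [(4.2, 9.8)]
```

The same index backs the “quantify at scale” case: once every call is text, counting mentions across thousands of recordings is a query, not a viewing session.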

Again, this being the case, I find it falls into the trap of being a novel and immediately useful solution, but far from a long-term value. I imagine the construct for thinking about newly machine-readable video is that it unlocks a new form of organizing. The ability to organize content is valuable at the surface, but the content being organized needs to be worth the effort.

For one, when video is being produced at scale, in the way that it is today, the shelf life of content is quite low. If you record a call for a product interview today, then when that product changes next month, the call is no longer valuable. Or at least the value of the call declines proportionally to how much the product changes. Let’s call this shelf life.
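
As a toy model (my own illustration, not a real valuation method), shelf life can be framed as value declining in proportion to how much the product has changed since the recording:

```python
def call_value(initial_value, fraction_product_changed):
    """Toy shelf-life model: a recording's value declines proportionally
    to the fraction of the product that has changed since it was made."""
    remaining = max(0.0, 1.0 - fraction_product_changed)
    return initial_value * remaining

# A user interview loses half its value once half the product has changed,
# and all of it once the product is fully reworked.
print(call_value(100.0, 0.5))  # 50.0
print(call_value(100.0, 1.0))  # 0.0
```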

The sales calls and user interviews from last year

Interestingly, for individuals – in the social media space – the notion of shelf life was given a sexy term: ephemerality. Since there is no cost to produce content individually, the positives of having content with a short shelf life outweigh the negatives, and among many other things, ephemerality became a positive differentiator.

For businesses, though, the creation of media is often far from free. Not only is the time of employees valuable, but the investment behind certain types of content is not immediately returned. So while short-shelf-life video is already widely in use, the question that comes to my mind is: what are the future industries in this new space that don’t have a near-term end of the road?

My general take is that organizing content is a limited venture. Having immediate access to content is useful, but at some level it is a novelty unless the content has long-shelf-life value. Going back to the trucking analogy: while trucking is a commodity, there are industries that were enabled by trucking, like the Walmarts – and equivalents are bound to emerge in this new Zoom-based freight model. While the shuttling of goods for commerce is important in freight, I imagine the point of value in the new ecosystem is going to be around helping businesses increase the value of their already existing content.

Might I even say, helping businesses “milk” the value.

Filed Under: video Tagged With: bundling and unbundling, disruption theory, video editing, zoom

Why is video editing so horrible today?

September 15, 2020 by rememberlenny

In the last three months, I have done more video post-production than I have in the past 12 years. Surprisingly, in those years, nothing seems to have changed. Considering how much media is now machine-analyzable content, both audio and visual, I’m surprised there aren’t more patterns that make navigating and arranging video content faster. Beyond that, I’m surprised there isn’t more process for programmatically composing video in a polished, complementary way to the existing manual methods of arranging.

A century ago, in the early days of the film camera, if you filmed something and wanted to edit it, you took your footage, cut it, and arranged it according to how you wanted it to look. Today, if you want to edit a video, you have to import the source assets into a specialty program (such as Adobe Premiere), and then manually view each item to watch or listen for the portion you want. Once you have the sections of each imported asset, you have to manually arrange each item on a timeline. Of course a ton has changed, but the general workflow feels the same.

Real life photo of me navigating my Premiere assets folders

How did video production and editing not get their own digital-first methods of creation? Computing power has skyrocketed. Access to storage is practically infinite. And our computers are networked around the world. How is it that the workflow of import, edit, and export still takes so long?

The consumerization of video editing has simplified certain elements by abstracting away seemingly important but complicated components, such as the linearity of time. Things like TikTok seem to be the most dramatic shift in video creation, in that the workflow shifts to immediate review and reshooting of video. Over the years, the iMovies of the world have moved timelines from a horizontal representation of elapsed time into general blocks of “scenes” or clips. The simplification through abstraction is important for the general consumer, but it reduces the attention to detail. This creates an aesthetic of its own, which seems to be the result of the changing tools.

Where are all the things I take for granted in developer tools, like autocomplete or class-method search, in the video equivalent? What does autocomplete look like when editing a video clip? Where are the repeatable “patterns” I can write once and reuse everywhere? Why does each item on a video canvas live in isolation, with no awareness of other elements or ability to interact with them?

My code editor searches my files and tries to “import” the methods when I start typing.

As someone who studied film and animation exclusively for multiple years, I’m generally surprised that the overall ways of producing content are largely the same as they were 10 years ago – and seemingly the same as they’ve been for the past 100.

I understand that the areas of complexity have become more niche, such as in VFX or multimedia. I have no direct experience with complicated 3D rendering, and I haven’t tried any visual editing for non-traditional video displays, so it’s a stretch to say film hasn’t changed at all. I haven’t scratched the surface of new video innovation, but all things considered, I wish some basic things were much easier.

For one, when it comes to visual layout, I would love something like Figma’s “autolayout” functionality. If I have multiple items on a canvas, I’d like them to self-arrange based on some kind of box model. There should be a way to assign the equivalent of styles as “classes”, as with CSS, and multiple text elements should be able to inherit and share padding and margin definitions. Things like flexbox and relative/absolute positioning would make visual templates significantly easier and faster for developing fresh video content.
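
A minimal sketch of what that box model could look like on a video canvas – every name here is hypothetical, and the layout rule is a deliberately tiny subset of flexbox (fixed-width items keep their width, flexible items split the leftover space):

```python
def layout_row(canvas_width, elements):
    """Arrange elements left-to-right, flexbox-style: fixed widths are
    honored, remaining space is split across flexible elements, and
    margins are respected on both sides."""
    fixed = sum(e["width"] + 2 * e.get("margin", 0) for e in elements if "width" in e)
    flex_count = sum(1 for e in elements if "width" not in e)
    flex_width = (canvas_width - fixed) / flex_count if flex_count else 0
    x, placed = 0, []
    for e in elements:
        margin = e.get("margin", 0)
        width = e.get("width", flex_width)
        placed.append({"name": e["name"], "x": x + margin, "width": width})
        x += width + 2 * margin
    return placed

# A fixed-width lower-third panel plus a flexible main area on a 1920px frame.
print(layout_row(1920, [{"name": "sidebar", "width": 400, "margin": 10},
                        {"name": "main"}]))
```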

Currently I make visual frames in Figma, then export them, because it’s so much easier than fumbling through the 2D translations in Premiere

I would love to have a “smarter” timeline that can surface “cues” I may want to hook into for visual changes. The cues could make use of machine-analyzable features in the audio and video, based on features detected in the available content. This is filled with lots of hairy areas, and definitely sounds nicer than it might be in actuality. As a basic example, the timeline could look at audio or a transcript and know when a certain speaker is talking. There are already services, such as Descript, that make seamless use of speaker detection. That should find some expression in video editing software. Even if the software itself doesn’t detect this information, it should make use of the metadata from other software.
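
The basic case is simple to sketch: given a speaker-labeled transcript (the input format here is invented, though Descript-style tools export similar data), emit a timeline cue wherever the speaker changes:

```python
def speaker_change_cues(segments):
    """Return the timestamps at which the active speaker changes."""
    cues, previous = [], None
    for seg in segments:
        if previous is not None and seg["speaker"] != previous:
            cues.append(seg["start"])
        previous = seg["speaker"]
    return cues

# Hypothetical speaker-diarized transcript of an interview.
segments = [
    {"start": 0.0, "speaker": "host"},
    {"start": 12.5, "speaker": "guest"},
    {"start": 40.0, "speaker": "guest"},
    {"start": 55.0, "speaker": "host"},
]
print(speaker_change_cues(segments))  # [12.5, 55.0]
```

Each cue is a natural hook point for a lower-third, a cut, or a layout change.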

The two basic views in Zoom. Grid or speaker.

More advanced would be to know when certain exchanges between multiple people form a self-contained “point”. Identifying when an “exchange” takes place, or when a “question” is “answered”, would be useful for title slides or lower-thirds with complementary text.

Descript will identify speakers and color code the transcript.

If there are multiple shots of the same take, it would be nice to have the clips note where they begin and end by lining up the audio. Reviewing content shouldn’t have to be done in a linear fashion if there are ways to fingerprint the content of a video or audio clip and compare it to itself or to other clips.
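
Lining up two takes by their audio is essentially a cross-correlation problem. A pure-Python sketch (a real tool would use FFT-based correlation on actual waveforms, not toy sample lists):

```python
def best_offset(reference, clip):
    """Slide `clip` across `reference` and return the offset (in samples)
    where their cross-correlation score is highest."""
    best, best_score = 0, float("-inf")
    for offset in range(len(reference) - len(clip) + 1):
        score = sum(r * c for r, c in zip(reference[offset:], clip))
        if score > best_score:
            best, best_score = offset, score
    return best

reference = [0, 0, 0, 5, 9, 5, 0, 0]   # a shared audio event at sample 3
clip = [5, 9, 5]                        # a second take that starts mid-event
print(best_offset(reference, clip))     # 3
```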

In line with “cues”, I would like to “search” my video in a much more comprehensive way. My iPhone’s Photos app lets me search by faces or location. How about that in my video editor? All the video clips with a certain face or background?

Also, it would be nice to generate these “features” with some ease. I personally don’t know what it would take to train a feature detector by viewing some parts of a clip, labeling them, and then using the labeled example to find other instances of similar visual content. I do know it’s possible, and it would be very useful for speeding up the editing process.

In my use case, I’m seeing a lot of video recordings of Zoom calls and webinars. This is another example of video content that generally looks the “same” and could be analyzed for certain content types. I would be able to navigate through clips much more quickly if I could filter video by when the screen shows many faces at once versus when only one speaker is featured.
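
As a sketch of that filter, assume a face detector has already produced a per-frame face count (the detector itself is out of scope here); separating “grid” frames from “speaker” frames is then a one-liner:

```python
def filter_by_view(frames, view):
    """Filter (timestamp, face_count) pairs by Zoom layout:
    'grid' keeps frames with many faces, 'speaker' keeps single-face frames."""
    wanted_many = view == "grid"
    return [t for t, face_count in frames if (face_count > 1) == wanted_many]

# Hypothetical face counts sampled from a recorded webinar.
frames = [(0.0, 9), (5.0, 9), (10.0, 1), (15.0, 1), (20.0, 9)]
print(filter_by_view(frames, "speaker"))  # [10.0, 15.0]
print(filter_by_view(frames, "grid"))     # [0.0, 5.0, 20.0]
```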

All of this to say: there are a lot of gaps in the tools available at the moment.

Filed Under: video Tagged With: film, post production, Programming, video editing

Which expensive events permanently go digital?

August 21, 2020 by rememberlenny

One of the more interesting conversations I had recently was with a B2B sales director who could clearly articulate the before-and-after changes from Covid. In short, companies allocate non-trivial budgets to send reps to in-person events, trade shows, and conferences, knowing there are unattributable effects from these costs. The side-channel interactions between conference talks, and the face time with in-person attendees, are crucial for most sales pipelines.

Using a framework I was introduced to by Daniel, from Pioneer, these events and their impact can fall onto a two-axis map of attributable results and costs. That is, results are either attributable to a cause or not, and actions are either costly to take or not. Between these two axes you can have sales channels that are expensive and attributable, expensive and not attributable, inexpensive and attributable, and inexpensive and not attributable.

GPT-3 Make me a stratechery styled chart for attributable results and cost

As noted from the sales side, most efforts to generate leads are costly and not attributable to specific actions. In other words, sending a sales rep to speak at a conference or signing up for a booth at a trade show is costly, and doesn’t generate leads or sales at a reasonable cost per acquisition. The intangible benefits, such as exposure and branding, are the justification for spending the money.

Outside of the individual sales rep’s perspective, a company’s yearly multimillion-dollar event may be held at huge cost and have relatively little attributable sales impact. While an event can give a slight sales bump compared to no event at all, the bump in no way justifies the huge cost of organizing the event. Think Salesforce’s Dreamforce or Google’s SPAN.

A few other examples of expensive, not-attributable actions are in the recruiting space. Engineering teams may sponsor large events or send employees to attend conferences for recruiting purposes, but don’t actually return with concrete recruiting leads. Again, there are often intangible benefits, such as employee satisfaction, but the point is clear.

Due to the major Covid shift of in-person events going digital, companies are paying closer attention to costs and attributable results. In the future, large in-person event budgets will be harder to justify against digital equivalents with attributable outcomes. If a $4 million event gave a 30% sales boost, but a $300,000 digital event can create a 20% boost, then the outstanding cost of the in-person event won’t be returning immediately. Is the remaining 10% worth $3,700,000? No.
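
The arithmetic behind that comparison, sketched with the hypothetical figures from the paragraph above:

```python
def cost_per_point(cost, boost_percent):
    """Dollars spent per percentage point of sales boost."""
    return cost / boost_percent

in_person = cost_per_point(4_000_000, 30)  # ~$133,333 per point of boost
digital = cost_per_point(300_000, 20)      # $15,000 per point of boost

# The in-person event buys 10 extra points of boost for $3.7M more:
extra_cost = 4_000_000 - 300_000
extra_points = 30 - 20
print(extra_cost / extra_points)  # 370000.0 dollars per extra point
```

At $370,000 per incremental point, versus $15,000 per point digitally, the in-person premium is hard to defend on attributable results alone.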

As many more events move online, a far greater number of previously in-person events will likely stay online. Considering that the ratio of online event invites to registrations and actual attendees continues to shrink, the need to refine the surface area of online events is becoming more important. A similar email invite and Zoom link isn’t sufficient. The event speakers, email reminders, in-event promotion, post-event follow-up, and summary resources are more important than ever.

One great write-up Ross shared with me was on webinar industry trends and the reception of the “Cambrian explosion” of digital events. (I had to do it)

You can find that here: https://www.trustradius.com/vendor-blog/the-impact-of-covid-19-on-digital-events

Hmm…which one should I attend?

Companies holding online events are now competing with the newest HBO hit series release, but have much less to offer. Considering the competition, tools that help companies promote, run, and engage audiences for online events are more important than ever. Over the last few years, we saw companies move from refined writing techniques to clearly defined visual brand guidelines. Now that well-laid-out photos and visual styles are not enough, companies are hiring in-house video producers to manage livestreams, tutorial content, and the editing of recorded events.

Video tools used to be generic timeline editors, like Final Cut or Adobe Premiere, but consumer tools such as TikTok, and livestream tools for the likes of Twitch, are revealing the potential for improvement. How many webinars are using OBS to engage their audiences? As new demands are set for quality video content, the tooling will continue to evolve and become more niche.

As the events that were previously in-person move online, I expect a lot more companies to appear in this space.

Filed Under: projects, video

The Unreal Engine in film production

August 14, 2020 by rememberlenny

Unreal engine used to shoot the Mandalorian using large LED screens synced with cameras

The film industry is an opaque producer of quality entertainment, which I have over-consumed in the past four months. Given how software and the internet have affected everything else I touch, I was curious how the entertainment industry has changed in recent years.

To learn more, I spoke with a few friends and acquaintances on the post-production side of video to better understand how the industry is changing. My hypothesis was that software has changed the coordination costs around huge productions, but that most of the tech innovation has been around media distribution (i.e. streaming). From what I gleaned, there haven’t been dramatic shifts in shooting a regular TV show or movie, as the majority of the film industry is executing on a complicated production cycle. That being said, there are a few areas I found really interesting.

Specifically, I was surprised how much the Unreal Engine is being used.

One major change I heard repeatedly mentioned was the impact of improved hardware capacity today, compared to 5 or 10 years ago. The increasing speed and capacity of new graphics cards and processors has affected the scale of video detail that can be captured and processed. Previously, footage from a camera that could shoot in 4K resolution would need to be downsampled for playback and editing, due to sheer limits in compute. Now footage is recorded in 6K, or even up to 9K in some cases, and cameras are capable of immediate playback, as opposed to the previous delays in which footage needed to be processed before it could be viewed.

In line with having more compute available on set, the most dramatic area where a new category of film production has emerged is the “previs” space. This is not technically post-production, but rather the effort before a shoot to plan a set, so as to capture the desired scenes rendered live with visual effects. Specifically, tools such as the Unreal Engine – originally built as the graphics engine for the Unreal first-person shooter – are used to generate a simulated scene with characters placed as actors and the camera shots planned. By using a simulated environment, shot planning can become more intentional, visual effects can be planned better, and the overall production can be better understood by everyone involved.

https://www.youtube.com/watch?v=gW1OTxYDvlQ

Motion capture is also huge.

The same Unreal Engine is used to eliminate significant post-production visual effects work by using motion capture to simulate environments “on camera”, through screens placed in the background of shots. The ability to capture final results “on camera” is a huge cost saving, as it reduces the need to edit footage after the fact. For example, when shooting a car driving down the road, rather than putting green screens in the windows and replacing their content with footage during post-production, the green screens can be replaced with large LED screens displaying the relevant visual content as defined in the Unreal Engine environment. By coordinating the screen placement, camera placement, lighting instruments, and designated camera shot, the actual filmed shot becomes a live simulation of sorts that avoids a major post-production step.

An example media production that showcased this on-camera environment was Disney’s “baby Yoda” hit series, The Mandalorian, in which large LED walls were used in concert with simulated worlds to capture fantasy landscapes on camera.

https://youtu.be/ysIOi_MP_cs?t=82

Huge thanks to Matt Baker, Greg Silverman, Andrew Prasse, and the others who spoke with me on this.

Filed Under: video Tagged With: film industry, unreal engine, visual effects
