
Remember Lenny

Writing online


Graffiti

Tracking street art with machine learning — updates

November 8, 2018 by rememberlenny

Mural from Reyes, Revok and Steel from MSK (https://www.fatcap.com/live/revok-steel-and-reyes.html)

Thank you for following the Public Art [0] project for building a genealogy of street art using machine learning. This project aims to create a central place for documenting street art from around the world, and to use modern image analysis techniques to build a historical reference of public art for the future.

Public Art iOS application

As a quick update: this project began in 2014 during Gary Chou's Orbital bootcamp [1], where I built a series of small side-projects exploring how software and graffiti can co-exist. One of those side-projects was an experiment in crawling Instagram images and building an iOS app for browsing street art near you. The app, while no longer fully functional, is still on the iOS App Store [2].

This past August, I began participating in the Pioneer tournament, a monthly tournament built around a community of creative young people working on interesting projects around the globe. I decided to restart the project of documenting graffiti by applying my familiarity with machine learning.

Kickstarter page

In September, I ran a “Quickstarter” (a $100 Kickstarter project) and, surprisingly, found a number of complete strangers beyond my friends who were interested in the project [3]. This gave me confidence to further explore how street art and software could co-exist.

During this same period, I continued crawling images from public resources online, and found a major issue with my old methods. While I could still crawl Instagram as I did in 2014, much of the metadata I needed for historical purposes was no longer available. Specifically, I no longer had access to the geographical data that was key to making these images useful. I wrote briefly about this in On the post-centralization of social media [4].



PublicArt.io current website’s functional prototype

Since then, I have moved my focus away from building tools to crawl public resources and toward building a foundation onto which publicly documented street art can be stored online.

This will emulate the many photo-sharing services already online, inspired by Flickr, Instagram, and Imgur, to name a few. The service will focus solely on documenting street art: collecting images of art pieces, viewing artists' work, and providing public access to this data.

I am proud to announce that Tyler Cowen [5], of the Mercatus Center at George Mason University [6], has extended his Emergent Ventures fellowship to my project [7].

Emergent Ventures

Although this project was originally personally funded, I now feel much more confident about dedicating my time to building out tools. With this grant, I am confident I am building something that can sustain its own costs and prove its worth.

Prior to my current exploration, I was experimenting with applying image feature extraction tools and embedding analysis techniques to compare how different street art pieces are similar or different. To over-simplify and explain briefly: image feature extraction tools can take an image and quantify the presence of a single parameter, which represents a feature [8].

Im analyzing graffiti images with machine learning techniques to build a genealogy of graffiti.

I use a convolutional neural network based feature extraction and encoded results. This shows 5,623 photos cluster the similar artists based on 25,088 dimensions. pic.twitter.com/BcYLyCMZSq

— Lenny Bogdonoff (@rememberlenny) September 10, 2018

The parameter can then be simplified into a single number, which can be compared across images. With machine learning tools, specifically the TensorFlow Inception library [9], tens of thousands of features can be extracted from a single image and then compared against the corresponding features of other images.
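As a rough sketch of that comparison step (the actual pipeline used Inception-extracted features; the short vectors below are stand-ins), cosine similarity reduces two feature vectors to a single comparable number:

```python
import numpy as np

def cosine_similarity(a, b):
    # 1.0 means identical direction in feature space; 0.0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in feature vectors (a real extractor like Inception
# produces tens of thousands of dimensions per image).
image_a = np.array([0.9, 0.1, 0.4])
image_b = np.array([0.8, 0.2, 0.5])
print(cosine_similarity(image_a, image_b))
```

The same call works unchanged on 25,088-dimensional vectors; only the cost of the dot product grows.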

By taking these embeddings, I was able to generate very interesting three-dimensional visualizations that showed how certain artists are similar or different. In the most basic cases, stencil graffiti mapped to one region of the space, while graffiti “bombs” and larger murals each mapped to their own regions [10].
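To get from tens of thousands of dimensions down to something viewable, the embeddings have to be projected into three dimensions. A minimal sketch of that step using PCA via NumPy's SVD (the `project_to_3d` helper and the random stand-in data are mine, not the original pipeline):

```python
import numpy as np

def project_to_3d(embeddings):
    """Project high-dimensional image embeddings to 3-D via PCA (SVD)."""
    centered = embeddings - embeddings.mean(axis=0)
    # Right singular vectors are the principal axes; keep the top 3.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T

# 100 stand-in embeddings with 512 dimensions each
# (the real run visualized 5,623 photos across 25,088 dimensions).
rng = np.random.default_rng(0)
points = project_to_3d(rng.normal(size=(100, 512)))
print(points.shape)  # (100, 3)
```

Each row of `points` is then one photo's position in the three-dimensional visual.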

Using the hundreds of thousands of images I crawled from Instagram before the geographical data became inaccessible, I analyzed the presence of street art around the world over time [11].

Video: 30 seconds of animated geolocation data for street art images taken around the world over time pic.twitter.com/YzTjdN2sLY

— Lenny Bogdonoff (@rememberlenny) November 2, 2018

This data, no longer associated with the actual images originally indexed due to Instagram's change in policy, provided insight into the presence of street art and graffiti around the world.

Interestingly, the image frequency also produced a visual that points to an obvious relationship between urban centers and street art. Analyzed further, there may be clear correlations between street art and real estate value, community social ties, political engagement, and other social phenomena.

In the past few days, I have focused on synthesizing the various ways I expect to use machine learning for analyzing street art. Because of the media's misrepresentation of artificial intelligence and the broad meaning of “machine learning” in technical and marketing circles, I was struggling to pin down what I meant myself.

Prior to this project's current incarnation, I had thought it would be possible to build object detection models to recognize different types of graffiti in images; for example, an expression of vandalism is different from a community-sanctioned mural. I also imagined it would be possible to identify specific letters in larger letter-form graffiti pieces. I believe it would be interesting to combine a well-defined label set and dataset with a variational auto-encoder to generate machine-learning-based letter-form pieces.

Going further, I thought it would be possible to use machine learning to detect when an image in a place was “new”, based on it not appearing in previous images from that place. I also thought it would be interesting to find camera feeds of railway cars traveling across the US and build a pipeline for capturing the graffiti on train cars, identifying each car's serial number, and tracking how the cars and their respective art traveled the country.


All of the above points are practical expressions of the machine learning based analysis techniques.


While these are interesting projects, I have narrowed my focus to the following six points for the time being: recognizing artists' work, tracking similar styles and influences, geo-localizing images [12], categorizing styles, correlating social phenomena, and finding new art. By tracking the images, their content, and their frequency, and by making this data available to others, I believe street art can create more value as it is and gain even more respect.

Based on recent work, I have a fully functional application that allows users to create accounts, upload images, and associate important metadata (artist, location, creation date) with images. While the user experience and design are not anywhere near what I would be proud of, I will move forward with testing the current form with existing graffiti connoisseurs.

As I continue to share about this project, please reach out if you have any interest directly or would like to learn more.


[0]: https://www.publicart.io
[1]: https://orbital.nyc/bootcamp/
[2]: http://graffpass.com
[3]: https://www.kickstarter.com/projects/rememberlenny/new-public-art-foundation-a-genealogy-of-public-st/updates
[4]: https://medium.com/@rememberlenny/on-the-instagram-api-changes-f9341068461e
[5]: http://marginalrevolution.com
[6]: https://mercatus.org/
[7]: https://marginalrevolution.com/marginalrevolution/2018/11/emergent-ventures-grant-recipients.html
[8]: https://en.wikipedia.org/wiki/Feature_extraction
[9]: https://www.tensorflow.org/tutorials/images/image_recognition
[10]: https://twitter.com/rememberlenny/status/1038992069094780928
[11]: Geographic data points — https://twitter.com/rememberlenny/status/1058426005357060096
[12]: Geolocalization — https://twitter.com/rememberlenny/status/1053626064738631681

Filed Under: Uncategorized Tagged With: Art, Graffiti, Machine Learning, Street Art, Towards Data Science

How I built a REST endpoint based Computer Vision task using Flask

December 31, 2017 by rememberlenny


This is a follow-up on my process of developing familiarity with computer vision and machine learning techniques. As a web developer (read: “Rails developer”), I find this growing sphere exciting, but I don't work with these technologies on a day-to-day basis. This is month three of a two-year journey to explore this field. If you haven't already, you can read Part 1 here: From webdev to computer vision and geo, and Part 2 here: Two months exploring deep learning and computer vision.

Overall Thoughts

Rails developers are good at quickly building out web applications with very little effort. Between scaffolds, clear model-view-controller logic, and the plethora of ruby gems at your disposal, Rails applications with complex logic can be spun up in a short amount of time. For example, I wouldn’t blink at building something that requires user accounts, file uploads, and various feeds of data. I could even make it highly testable with great documentation. Between Devise, Carrierwave (or the many other file upload gems), Sidekiq, and all the other accessible gems, I would be up and running on Heroku within 15 minutes.

Now, add a computer vision or machine learning task and I would have no idea where to go. Even as I explore this space, I still struggle to find practical applications for machine learning concepts (neural nets and deep learning) aside from word association or image analysis. That being said, the interesting ideas (which I have yet to find practical applications for) are around trend detection and generative adversarial networks.

Google search for “how to train a neural network”

As a software engineer, I have found it hard to understand the practical values of machine learning in the applications I build. There is a lot of writing around models (in the machine learning sense, rather than the web application/database sense), neural net architecture, and research, but I haven’t seen as much around the practical applications for a web developer like myself. As a result, I decided to build out a small part of a project I’ve been thinking about for a while.

The project was meant to detect good graffiti on Instagram. The original idea was to use machine learning to qualify what “good graffiti” looked like, and then run the machine learning model to detect and collect images. Conceptually, the idea sounds great, but I have no idea how to “train a machine learning model”, and I have very little sense of where to start.

I started building out a simple part of the project with the understanding that I would need to “train” my “model” on good graffiti. I picked a few Instagram accounts of good graffiti artists, where I knew I could find high quality images. After crawling the Instagram accounts (which took much longer than expected due to Instagram’s API restrictions) and analyzing the pictures, I realized a big problem at hand. The selected accounts were great, but had many non-graffiti images, mainly of people. To get the “good graffiti” images, I was first going to need to filter out the images of people.

The application I built to crawl Instagram created a frontend that displayed graffiti.

By reviewing the pictures, I found that as many as four out of every ten images were of a person or had a person in them. As a result, before even starting the task of “training” a “good graffiti” “model”, I needed to get a set of pictures that didn't contain any people.

(Side note for non-machine learning people: I’m using quotations around certain words because you and I probably have an equal understanding of what those words actually mean.)

Rather than having a complicated machine learning application that did some complicated neural network-deep learning-artificial intelligence-stochastic gradient descent-linear regression-bayesian machine learning magic, I decided to simplify the project into building something that detected humans in a picture and flagged them. I realized that many examples of machine learning tutorials I had read before showed me how to do this, so it was a matter of making those tutorials actually useful.

—

The application (with links to code)

I was using Ruby on Rails for the web application that managed the database and rendered content. I did most of the Instagram image crawling in Ruby, via a Redis-backed background job library called Sidekiq, which makes running delayed tasks easy.

The PyImageSearch article used as reference is great and can be found at https://www.pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/

For the machine learning logic, I had a code example for object detection using OpenCV, from a PyImageSearch.com tutorial. The code example was not complete for my purposes: it detected one of 30 different items in the trained image model, one of them being people, and drew a box around the detected object. In my case, I slightly modified the example and placed it inside a simple web application based on Flask.

Link to Github: The main magic of the app

I made a Flask application with an endpoint that accepted a JSON blob containing an image URL. The application downloaded the image from the URL and processed it through the code example, which drew a bounding box around the detected object. I only cared about detecting people, so I created a basic condition that gave a specific response when a person was detected and a generic response for everything else.
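A minimal sketch of that endpoint shape (the function names and the stubbed detector below are my own placeholders, not the original code; the real version ran the downloaded bytes through the OpenCV deep-learning detector):

```python
from urllib.request import urlopen

from flask import Flask, jsonify, request

app = Flask(__name__)

def fetch_image(url):
    # Download the raw image bytes from the submitted URL.
    return urlopen(url).read()

def contains_person(image_bytes):
    # Placeholder: the real app ran the bytes through the OpenCV
    # object detector and checked whether a "person" was detected.
    raise NotImplementedError

@app.route("/detect", methods=["POST"])
def detect():
    payload = request.get_json(force=True, silent=True)
    if not payload or "url" not in payload:
        return jsonify({"error": "missing image url"}), 400
    image = fetch_image(payload["url"])
    # Specific response for a detected person, generic response otherwise.
    if contains_person(image):
        return jsonify({"person": True})
    return jsonify({"person": False})
```

Keeping `fetch_image` and `contains_person` as separate functions makes the machine learning piece swappable, which is most of what the integration amounts to.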

This simple endpoint was the machine learning magic at work. Sadly, it was also the first time I’d seen a practical, usable example of how the complicated machine learning “stuff” integrates with the rest of a web application.

For those who are interested, the code is below.

https://github.com/rememberlenny/Flask-Person-Detector

—

Concluding Realizations

I was surprised that I hadn't seen a simple Flask-based implementation of a deep neural network before. Based on this implementation, I also feel that when training a model isn't involved, applying machine learning to an application is just like using a library with a useful function. I assume that in the future, the separation between models and the libraries for using them will be simplified, similar to how a library is “imported” or added with a bundler. My guess is that some of these tools already exist, but I am not deep enough into the field to know about them.

https://www.tensorflow.org/serving/

Through reviewing how to access the object detection logic, I found a few services that seemed relevant but ultimately were not quite what I needed. Specifically, there is a tool called TensorFlow Serving, which seems like it should be a simple web server for TensorFlow models, but isn't quite simple enough. It may be what I need, but setting up a server or web application solely to run TensorFlow is quite difficult.

Web service based machine learning

A lot of the machine learning examples I find online are very self-contained. They start with the problem, then provide the code to run the example locally. Often the input image is provided by file path via a command line interface, and the output is a Python-generated window that displays a manipulated image. This isn't very useful as a web application, so making a REST endpoint seems like a basic next step.

Building the machine learning logic into a REST endpoint is not hard, but there are some things to consider. In my case, the server was running on a desktop computer with enough CPU and memory to process requests quickly. This might not always be the case, so a future endpoint might need to run tasks asynchronously using something like Redis. An HTTP request here would most likely hang and possibly time out, so some basic micro-service logic would need to be considered for slow queries.
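One hedged sketch of that asynchronous shape, using Python's standard library in place of Redis (a real deployment would hand jobs to a worker via something like Redis/Sidekiq and let the client poll for results):

```python
import queue
import threading
import uuid

jobs = queue.Queue()
results = {}

def worker():
    # A background worker pulls slow detection jobs off the queue,
    # so the HTTP request can return immediately with a job id.
    while True:
        job_id, image_url = jobs.get()
        # Stand-in for the slow model call on the downloaded image.
        results[job_id] = f"processed {image_url}"
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(image_url):
    # Enqueue the work and hand back an id the client can poll with.
    job_id = str(uuid.uuid4())
    jobs.put((job_id, image_url))
    return job_id

job = submit("http://example.com/graffiti.jpg")
jobs.join()  # in a real service the client would poll, not block
print(results[job])
```

The point is only the split: the request path does nothing slow, and the model call happens off to the side.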

Binary expectations and machine learning brands

A big problem with the final application was that processed graffiti images were sometimes falsely flagged as containing people. When a painting contained features that looked like a person, such as a face or body, the object classifier falsely flagged it. Conversely, there were times when pictures of people were not flagged as containing people.

[GRAFFITI ONLY] List of images that were noted to not have people. Note the images with the backs of people.

Web applications require binary conclusions to take action. An image classifier provides a percentage rating of whether the detected object is present. Larger object detection models will suggest more than one potentially detected object. For example: there is a 90% chance of a person being in the photo, a 76% chance of an airplane, and a 43% chance of a giant banana. This isn't very useful when the application processing the responses just needs to know whether or not something is present.
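That gap can be bridged with a simple thresholding step. A sketch (the class names and probabilities echo the example above; the 0.5 cutoff is an arbitrary choice the application owner has to make):

```python
def is_present(detections, label, threshold=0.5):
    # Collapse the classifier's percentage ratings into a yes/no answer.
    return detections.get(label, 0.0) >= threshold

detections = {"person": 0.90, "airplane": 0.76, "banana": 0.43}
print(is_present(detections, "person"))  # True
print(is_present(detections, "banana"))  # False
```

Choosing the threshold is exactly where the false-positive and false-negative problems above get traded against each other.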

[PEOPLE ONLY] List of images that were classified as people. Note the last one is a giant mural with features of a face.

This brings up the importance of quality in any machine learning based process. Given that very few object classifiers or image-based processes are 100% correct, the quality of an API is hard to gauge. For commercial implementations of these object classifier APIs, service brands will be largely shaped by the edge cases of a few requests. Because machine learning itself is so opaque, the brands of the service providers will be all the more important in determining how trustworthy these services are.

Conversely, because the quality of machine learning tasks varies so greatly, a brand may struggle to showcase its value to a user. When solving a machine learning task is pegged to a dollar amount, for example per API request, the ability to do the same thing for free is appealing. On price alone, rolling your own free object classifier beats using a third-party service. The branded machine learning service market still has a long way to go before becoming clearly preferable to self-hosted implementations.

Specificity in object classification is very important

Finally, when it comes to any machine learning task, specificity is your friend. With graffiti in particular, it's hard to qualify something that varies so much in form. Graffiti is a category that encompasses a huge range of visual compositions; even a person may struggle to qualify what is or isn't graffiti. Compared to detecting a face or a fruit, the specificity of the category is important.

The brilliance of WordNet and ImageNet is the strength of their categorical specificity. By classifying the world through words and their relationships to one another, there is a way to qualify the similarities and differences of images. For example, a pigeon is a type of bird, different from a hawk, and completely different from an airplane or a bee. The relationships between those things allow them to be clearly classified. No such specificity exists for graffiti, but it is needed to properly improve an object classifier.

Final final

Overall, the application works and was very helpful. Making it removed more of the mystery around how machine learning and image recognition services work. As I noted above, the process also made me much more aware of the shortfalls of these services and the places where this field is not yet defined. I definitely think this is something all software engineers should learn how to do. Before the available tools become simple to use, I imagine there will be a good period of a complicated ecosystem to navigate. Similar to the browser wars before web standards were formed, there is going to be a lot of vying for market share among the machine learning providers. You can already see it between services from larger companies like Amazon, Google, and Apple. At the hardware and software level, it is also very apparent between Nvidia's CUDA and AMD's price appeal.

More to come!

Filed Under: Uncategorized Tagged With: Computer Vision, Graffiti, Machine Learning, Programming, Python

How I Used Machine Learning to Inspire Physical Paintings

July 11, 2017 by rememberlenny

Since I was 15 years old, I have been painting graffiti under bridges and in abandoned buildings. I grew up in San Francisco when street art was booming, and inspired by the colors and aesthetic, I looked for ways to create art and taught myself to paint. As I got older, I discovered the graffiti communities on Flickr, and began making an effort to meet artists where I lived and share photos of my work online. As Tumblr grew in popularity, the community moved. Then Instagram emerged, and the community moved again.








“Gift”, Photo collection from 2010–2012. All photos taken and painted by author.

In recent years, I haven’t had the same leeway to paint in public. There was a greater cultural acceptance of street art when I lived abroad. Painting on walls was seen as beautification in areas where there was much demolition. When I moved back to the US, I started painting on larger canvases, and eventually moved toward spray cans and paint brushes.

Kawan’s “Sunset Running” project. Courtesy of Kawandeep Virdee.

Inspired by a project by Kawandeep Virdee, I photoshopped the paintings with motion blur filters, and modified the lighting effects. The result was a creative jumping-off point, enabling me to create a digitally inspired physical painting.

Last year, I started experimenting even more with digitally manipulated images, and their role in inspiring physical paintings. I began creating aesthetically beautiful images by taking classic paintings from the 18th and 19th century and running various photoshop filters over them. I found the color and contrast from these old paintings to be unmatched and beautiful.

Process for turning classic paintings into beautiful color muses.

I took the digital pieces I created and used them as inspiration for new work: remixing the classical paintings on a computer and then physically painting the remixed image.

The Ninth Wave hanging on my wall. Photo by author.

I continued my interest in graffiti, again using the digital space as a canvas, and spent a few months building various software tools I thought would be useful for graffiti artists. After amassing a library of literally millions of graffiti images, I realized I wanted to do something more than just browse them, so I started exploring different machine learning techniques.

Painting based on Ray Collin’s Seascape series painting after digitally manipulating the photo. Photo by author via RememberLenny

I started teaching myself about the application of neural networks to something called “style transfer”: the process of analyzing an image for the qualities that make it recognizable, then applying those qualities to another picture. This meant I could replicate one image's color, shapes, contrast, and various other features onto another. The most commonly recognized style transfer application is from Van Gogh's “Starry Night” to any photograph.

Example from a GitHub repository that implements the Artistic Style Transfer algorithm using Torch. Credit: jcjohnson

Similar to my previous project of painting the digital sunset images, I processed pictures using the artistic style transfer algorithm and then painted them. Referring to the plethora of graffiti images I’d already collected, I used images of nature and processed them in the style of street art I thought looked interesting. The end result was an aesthetically interesting image I couldn’t imagine creating from scratch.

Process of creating the Artistic Style Transfer images.

It’s been a few months since I’ve done anything with this technique of mixing images and painting them. I hope the process depicted above can be a source of inspiration for other programmer-painters who enjoy mixing both practices.

Final version of the digitally inspired painting. Photo by author.

Below are a few examples of what an artist can create by combining street art images with photographs.











Photos by author.

Thanks to Edwin Morris for the grammatical review and Lam Thuy Vo for the ideas.

Filed Under: Uncategorized Tagged With: Artificial Intelligence, Graffiti, Machine Learning, Programming, Web Development

Countries of GraffPass users

October 11, 2014 by rememberlenny

Filed Under: Uncategorized Tagged With: Graffiti, location

api to graffiti data

October 9, 2014 by rememberlenny

I made an effective low-fidelity API for the graffiti data.

You can now query with a location and get back results in chunks of 25. You can page through the results via the page attribute.

I'm conscious that there is currently no key/secret authentication. I have what I need for the application feed, so I'll go from here.

http://www.graffpass.com/find.json?page=4&search=40.714352999999996%2C-74.005973
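For illustration, a small helper for building that query (the `find_url` name is my own; the endpoint and parameters are as shown above):

```python
from urllib.parse import urlencode

BASE = "http://www.graffpass.com/find.json"

def find_url(lat, lng, page=1):
    # Results come back in chunks of 25; step through them with `page`.
    return BASE + "?" + urlencode({"page": page, "search": f"{lat},{lng}"})

print(find_url(40.714352999999996, -74.005973, page=4))
```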

Filed Under: Uncategorized Tagged With: api, Graffiti, orbital, webdev

Deciding where to focus attention

October 8, 2014 by rememberlenny

I set up http://www.graffpass.com/find today. It's a way for people to find the graffiti closest to them, based on a pool of Instagram photos I scraped over two weeks.

This is a concept experiment for an iPhone app I want to finish next week.

The application currently has two mechanisms for finding images: typing a location, or checking against the user's current location. For the latter, I use the browser's geolocation API to determine the person's (or device's) location.

I am trying to figure out how I want to move forward with this project. Instead of spreading myself too thin, I'm considering focusing my attention on a city-by-city launch. Because I don't know the best cities to focus on, I'm tracking use of the “My Location” button.

Instead of building out a way for data to be saved off this service, I am using a very low-fidelity database: Google Drive. After finding a wiki on “How to write to Google Docs using JavaScript”, I implemented a quick JavaScript function to save each location-check result.

Filed Under: Uncategorized Tagged With: google docs, Graffiti, orbital, webdev

