
Remember Lenny

Writing online


Archives for 2018

Tracking street art with machine learning: updates

November 8, 2018 by rememberlenny

Mural from Reyes, Revok and Steel from MSK (https://www.fatcap.com/live/revok-steel-and-reyes.html)

Thank you for following the Public Art [⁰] project, an effort to build a genealogy of street art using machine learning. The project aims to create a central place for documenting street art from around the world and to use modern image analysis techniques to build a historical reference of public art for the future.

Public Art iOS application

As a quick update, this project began in 2014 during Gary Chou's Orbital bootcamp [¹], where I built a series of small experimental side-projects exploring how software and graffiti can co-exist. One of those side-projects was an experiment in crawling Instagram images and building an iOS app for browsing street art near you. This app, which is no longer fully functional, is still on the iOS App Store [²].

This past August, I began participating in the Pioneer tournament, a monthly tournament built around a community of creative young people working on interesting projects around the globe. I decided to restart the project of documenting graffiti, this time integrating my familiarity with machine learning.

Kickstarter page

In September, I ran a "Quickstarter", a $100 Kickstarter project, and surprisingly found that, beyond friends, a number of complete strangers were interested in the project [³]. This gave me confidence to further explore how street art and software could co-exist.

During this same period, I resumed crawling images from public resources online and quickly found a huge issue with my old methods. While I could still crawl Instagram much as I did in 2014, much of the metadata I needed for historical purposes was no longer available. Specifically, I no longer had access to the geographical data that was key to making these images useful. I wrote briefly about this in On the post-centralization of social media [⁴].



PublicArt.io current website’s functional prototype

Since then, I have moved my focus away from building tools to crawl public resources and toward building a foundation onto which publicly documented street art can be stored online.

This will emulate the many photo-sharing services already online, inspired by Flickr, Instagram, and Imgur, to name a few. The focus of the service will be solely to document street art, help collect images of art pieces, view artists' work, and provide public access to this data.

I am proud to announce that Tyler Cowen [⁵], of the Mercatus Center at George Mason University [⁶], has extended his Emergent Ventures fellowship to my project [⁷].

Emergent Ventures

Although this project was originally self-funded, the grant gives me greater confidence to extend my time into building out tools. With it, I am confident I am building something that can sustain its own costs and prove its worth.

Prior to my current stage of exploration, I was experimenting with applying image feature extraction tools and embedding analysis techniques to compare how different street art pieces are similar or different. To over-simplify and explain briefly: image feature extraction tools can take an image and quantify the presence of a single parameter, which represents a feature [⁸].

Im analyzing graffiti images with machine learning techniques to build a genealogy of graffiti.

I use a convolutional neural network based feature extraction and encoded results. This shows 5,623 photos cluster the similar artists based on 25,088 dimensions. pic.twitter.com/BcYLyCMZSq

— Lenny Bogdonoff (@rememberlenny) September 10, 2018

The parameter can then be simplified into a single number, which can be compared across images. With machine learning tools, specifically the TensorFlow Inception library [⁹], tens of thousands of features can be extracted from a single image and then compared against the corresponding features from other images.
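For readers who want a concrete picture, here is a minimal sketch of that idea using Keras' pretrained VGG16 rather than Inception (the 25,088-dimensional vectors mentioned in the tweet above match VGG16's final convolutional block, but this is my illustration, not the project's exact pipeline, and the file names are hypothetical):

    import numpy as np
    from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
    from tensorflow.keras.preprocessing import image

    # include_top=False drops the classifier head; the flattened convolutional
    # output is 7 x 7 x 512 = 25,088 values per image.
    model = VGG16(weights="imagenet", include_top=False)

    def extract_features(path):
        # Load an image, preprocess it, and return its flattened feature vector.
        img = image.load_img(path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        return model.predict(x).flatten()

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Higher similarity suggests two pieces share visual features (style, color, form).
    sim = cosine_similarity(extract_features("mural_1.jpg"), extract_features("stencil_1.jpg"))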

By taking these embeddings, I was able to generate very interesting three-dimensional visualizations that showed how certain artists are similar or different. In the most basic cases, stencil graffiti mapped to the same region of the space, while graffiti "bombs" and larger murals each mapped to their own respective regions [¹⁰].
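As a sketch of how such a visualization can be produced, the per-image feature vectors can be projected into three dimensions with t-SNE (an assumption for illustration; the actual project may have used TensorBoard's embedding projector or a different dimensionality-reduction method, and the saved file name is hypothetical):

    import numpy as np
    from sklearn.manifold import TSNE

    # Load an (n_images, 25088) array of feature vectors built as in the snippet above.
    features = np.load("street_art_features.npy")

    # Project into 3-D so visually similar pieces land near each other.
    coords_3d = TSNE(n_components=3, perplexity=30).fit_transform(features)

    # coords_3d can then be fed into a 3-D scatter plot, colored by artist or style.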

Using the hundreds of thousands of images I was able to crawl from Instagram before the geographical data was made inaccessible, I analyzed the presence of street art around the world over time [¹¹].

Video: 30 seconds of animated geolocation data for street art images taken around the world over time pic.twitter.com/YzTjdN2sLY

— Lenny Bogdonoff (@rememberlenny) November 2, 2018

This data, which was no longer associated with the actual images that were originally indexed (due to Instagram's change in policy), provided insight into the presence of street art and graffiti around the world.
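To illustrate what that analysis looks like, here is a minimal sketch of grouping geotagged image records by month, assuming a hypothetical CSV of crawled metadata with latitude, longitude, and timestamp columns:

    import pandas as pd

    # Hypothetical file and column names for the crawled metadata.
    records = pd.read_csv("street_art_locations.csv", parse_dates=["taken_at"])
    records["month"] = records["taken_at"].dt.to_period("M")

    # Bin coordinates into roughly city-sized cells and count images per cell per month;
    # each monthly group becomes one frame of an animation like the one in the tweet above.
    records["lat_bin"] = records["latitude"].round(1)
    records["lon_bin"] = records["longitude"].round(1)
    monthly = records.groupby(["month", "lat_bin", "lon_bin"]).size().reset_index(name="images")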

Interestingly, the image frequency also provides a visual that points to an obvious relationship between urban centers and street art. If this were analyzed further, there may be clear correlations between street art and real estate value, community social ties, political engagement, and other social phenomena.

In the past few days, I have focused on synthesizing the various ways I expect to use machine learning for analyzing street art. Because of the media's misrepresentation of artificial intelligence and the broad meaning of machine learning in technical and marketing circles, I was struggling to pin down what I meant myself.

Prior to this incarnation of the project, I had thought it would be possible to build object detection models that recognize different types of graffiti in images. For example, an expression of vandalism is different from a community-sanctioned mural. I also imagined it would be possible to build ways of identifying specific letters in larger letter-form graffiti pieces. I believe it would be interesting to combine a well-defined set of labels and data with a variational auto-encoder to generate machine learning-based letter-form pieces.

Going further, I thought it would be possible to use machine learning to detect when a piece in a place was "new", based on it not appearing in previous images from that specific place. I also thought it would be interesting to find camera feeds of railway cars traveling across the US and build a pipeline for capturing the graffiti on train cars, identifying each train car's serial number, and tracking how the cars and their respective art traveled the country.


All of the above are practical applications of machine learning-based analysis techniques.


While these are interesting projects, I have narrowed my focus to the following six points for the time being: recognizing artists' work, tracking similar styles and influences, geo-localizing images [¹²], categorizing styles, correlating social phenomena, and finding new art. By tracking the images, their content, and their frequency, and by making this data available to others, I believe street art can create more value as it is and gain even more respect.

Based on recent work, I have a fully functional application that allows users to create accounts, upload images, and associate important metadata (artist, location, creation date) with images. While the user experience and design are not anywhere near what I would be proud of, I will be moving forward with testing the current form with existing graffiti connoisseurs.

As I continue to share about this project, please reach out if you have any interest directly or would like to learn more.


[0]: https://www.publicart.io
[1]: https://orbital.nyc/bootcamp/
[2]: http://graffpass.com
[3]: https://www.kickstarter.com/projects/rememberlenny/new-public-art-foundation-a-genealogy-of-public-st/updates
[4]: https://medium.com/@rememberlenny/on-the-instagram-api-changes-f9341068461e
[5]: http://marginalrevolution.com
[6]: https://mercatus.org/
[7]: https://marginalrevolution.com/marginalrevolution/2018/11/emergent-ventures-grant-recipients.html
[8]: https://en.wikipedia.org/wiki/Feature_extraction
[9]: https://www.tensorflow.org/tutorials/images/image_recognition
[10]: https://twitter.com/rememberlenny/status/1038992069094780928
[11]: Geographic data points: https://twitter.com/rememberlenny/status/1058426005357060096
[12]: Geolocalization: https://twitter.com/rememberlenny/status/1053626064738631681


On the post-centralization of social media

October 24, 2018 by rememberlenny


Below are some running thoughts on the centralization of social media.

I have been reactivating many of my public art crawlers for publicart.io. In the process, I am only now realizing how drastic some of the post-Cambridge Analytica API changes from Facebook and Instagram are for the internet.

While the points below are easily arguable in light of privacy and security consciousness, I believe the optimism of earlier social media policy is lost. Or, in another light, wide access to existing social media content is now reserved for commercial behavior that aligns with the larger digital platforms' agendas. More on this later.

The changes to the respective digital social platforms are positive for privacy- and security-minded individuals. Considering the amount of unwise, and often unconscious, public data that many internet service users produce, this change places new responsibility in the hands of the service providers.

Instagram #streetart hashtag results

To name a few of the known harms this change addresses: unwittingly public Venmo purchases, doxxing and personal security threats fueled by public social media presences, highly scalable identity theft (in terms of images used for fake social media profiles), and, in the much more extreme cases, the suppression of human rights activists through tracking of their social media postings.

I’m sure the same could be done for exploitative practices.

There is something magical that is only possible when the cost of media production is low and distributed across a large body of creators. This is now lost.

The changes in these APIs create an incentive for new communities to regain control of their fates. While some larger groups built on the existing social platforms (Facebook groups, etc.) will not move, new groups that have a choice of where to build will consider their future portability. Perhaps this will result in a series of tools that help smaller communities bootstrap and develop their respective competitive advantages against existing mediums.

Now that these changes are taking place, I have a few thoughts around advantages that new app developers can consider investing effort into:

1. User education
2. Privacy first development

The changes at big online social media companies, such as Facebook, LinkedIn, and others (RIP Google+), are forcing developers to consider building tools in areas that were previously unimportant, such as login and bootstrapping social graphs.

Example API change notice

The API changes have become ammunition in developer communities: one more reason for app developers to never build on an external entity. In some cases, the overnight changes broke many digital businesses that depended on the readily available API endpoints for searching and accessing publicly displayed social media content.

Advertisement found in European airline magazine

For new app developers, choices around educating users on how their data is being used can be a huge advantage. While larger social entities are optimizing to serve more advertisements or capture your attention at some future time, an advantage can be assuring your users they don't have to worry. By educating them about how the data is stored, what is being stored, and why, they will be reminded of what you are doing even when they are not prompted by other applications. This itself becomes an implicit reminder when compared to the silent bulldozing elsewhere.

Privacy-first development for new applications falls in line with the first point, but distinguishes itself by never having to make drastic changes in the future: from the beginning you can give your users confidence, as opposed to a future privacy 180 at the expense of your platform's users. Rather than relying on growth or marketing techniques that exploit inattentive users, services could develop trust with end-users and keep them coming back, if needed.

Reflecting on the changes, it definitely feels like we are in a new generation of internet applications. Just as there was a race to minimize the time on applications and maximize users, perhaps there will be a swing in the opposite direction, where services compete to build more wholesome trust with the individuals who take time to use their service over others.


Machine learning

April 11, 2018 by rememberlenny


I used to work at www.comet.ml, where they are building a really amazing tool for machine learning engineers. The short of it is they help track experiments using a single line of code that automagically saves everything to make your model reproducible. You can get great experiment logging and history without being tied to a single platform.

In my own time, I decided to put @cometml to the test while training a model for logo detection, using RetinaNet.

https://twitter.com/rememberlenny/status/983897644094447617

RetinaNet is a very high-quality object detector that uses the method from the paper "Focal Loss for Dense Object Detection" (by Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár). The Keras implementation of this paper can be found here: https://github.com/fizyr/keras-retinanet

Using this repository, I augmented the train.py script and added the www.comet.ml training example code. With a single line of code, I was able to get a live view of the model’s training process, access to the Keras RetinaNet code, a snapshot of the hyperparameters I used when running the program, and the results.


I am obviously biased because I work on this product, but I honestly have to say that I was impressed. While trying to run various training processes in the past, I repeatedly got stuck or underwhelmed by the output. With Comet.ml, I have a live interface into the training process that extends beyond the bash terminal. I do most of my development locally, but train models on a remote development machine I use. I can now train the model and monitor the overall process using Comet.ml, rather than needing to keep an open session to monitor any changes.

I’m reiterating the process I went through below for anyone else who wants to try.


Setup my environment

I started with setting up my remote environment and getting the code I would be using for the RetinaNet training process. I used a dataset of logos from various companies, very similar to something that can be found on Kaggle.

I had to install the RetinaNet library and various dependencies on my remote machine. Because my machine has a GPU, I installed the tensorflow-gpu version 1.4. I also updated the train.py script that RetinaNet uses to run and added the single line of code from Comet.ml to kick off the training process.

Side note: We make it really easy to connect your training process to your github repo. This way, once you figure out how to get the best training result, you can create a pull request that takes a snapshot of your code and hyperparameters.

Comet.ml gives you a code snippet that you can copy into any machine learning program. Just make sure the comet_ml import script is at the top.

Install Comet.ml

All I had to do was copy the initialization script into the Keras RetinaNet train.py file and run the code. That's it.
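For reference, the snippet looks roughly like this (the API key and project name below are placeholders, not the real values):

    # comet_ml must be imported before keras so Comet can hook into training.
    from comet_ml import Experiment

    experiment = Experiment(api_key="YOUR_API_KEY", project_name="retina-net")

    import keras  # imported after comet_ml
    # ... the rest of keras-retinanet's train.py continues unchanged ...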


Track your experiment

Once Comet.ml is installed, the experiment code will pull all the hyperparameters you define during runtime. What's also very cool is that, depending on whether you have set up your Comet.ml project to be public or private, you will get a web URL to monitor the experiment training in real time.

This is the terminal after I run the train.py file with hyperparameters. Notice the experiment URL is generated at the top.

Monitor the experiment

In my case, the training process was estimated to take 10 hours, so I was able to detach from my active session and monitor Comet.ml. The dashboard for the experiment shows a live chart of the loss and accuracy metrics. It also provides a clear picture of the code and hyperparameters used to get the recorded result.

Example of the live loss and accuracy metrics being charted. https://www.comet.ml/lenny/retina-net/d83a54add91b4a10869977ac4d440d81

Example of the code being saved. https://www.comet.ml/lenny/retina-net/d83a54add91b4a10869977ac4d440d81

Example of the hyperparameters being logged. Notice how Comet pulls out the arguments used as well as their values. https://www.comet.ml/lenny/retina-net/d83a54add91b4a10869977ac4d440d81

Finally, Comet.ml also logs the terminal output. This is really useful because you can actually just have the Comet.ml website open rather than needing an active SSH session to the server running your experiment. You also don’t have to worry about accidentally disconnecting and losing your progress.

The ā€œOutputā€ tab on experiment pages show a live view and historical record of the terminal output while training. https://www.comet.ml/lenny/retina-net/d83a54add91b4a10869977ac4d440d81

That's it!

Is it useful?

If you struggle with managing your experiment history and reproducing results, you should definitely check out Comet.ml.

And let me know what you think!


Learning to apply Machine Learning tools and Computer Vision as a Rails Developer

January 1, 2018 by rememberlenny

This is a follow-up on my process of developing familiarity with computer vision and machine learning techniques. As a web developer (read: "Rails developer"), I find this growing sphere exciting, but I don't work with these technologies on a day-to-day basis. This is month three of a two-year journey to explore this field. If you haven't read them already, you can see Part 1 here: From webdev to computer vision and geo, and Part 2 here: Two months exploring deep learning and computer vision.

Overall Thoughts

Rails developers are good at quickly building out web applications with very little effort. Between scaffolds, clear model-view-controller logic, and the plethora of Ruby gems at your disposal, Rails applications with complex logic can be spun up in a short amount of time. For example, I wouldn't blink at creating an application that required user accounts, file uploads, and various feeds of data. I could even make it highly testable with great documentation. Between Devise, Carrierwave (or the many other file upload gems), Sidekiq, and all the other accessible gems, I would be up and running on Heroku within 15 minutes.

Now, add a computer vision or machine learning task and, previously, I would have had no idea where to go. Even as I explore this space, I still struggle to find uses for machine learning concepts (neural nets and deep learning) in practical applications. The most practical ideas are word association or image analysis. That being said, the interesting ideas (which I have yet to find practical applications for) are around trend detection and generative adversarial networks.

As a software engineer, I have found it hard to understand the practical value of machine learning in the applications I build. There is a lot of writing around models (in the machine learning sense, rather than the web application/database sense), neural net architecture, and research, but I haven't seen as much around the practical applications for a web developer like myself. As a result, I decided to build out a small part of a project I've been thinking about for a while.

The project was meant to detect good graffiti on Instagram. The original idea was to use machine learning to qualify what "good graffiti" looked like, and then run the machine learning model to detect and collect images. In concept the idea sounds great, but I had no idea how to "train a machine learning model", and I had very little sense of where to start.

I started building out a simple part of the project with the understanding that I would need to "train" my "model" on good graffiti. I picked a few Instagram accounts of good graffiti artists, where I knew I could find high-quality images. After crawling the Instagram accounts (which took much longer than expected due to Instagram's API restrictions) and analyzing the pictures, I realized there was a big problem at hand. The selected accounts were great, but had many non-graffiti images, mainly of people. To get the "good graffiti" images, I was first going to need to filter out the images of people.

By reviewing the pictures, I found that as many as four out of every ten images were of a person or had a person in them. As a result, before even starting the task of "training" a "good graffiti" "model", I needed to get a set of pictures that didn't contain any people.

(Side note for non-machine learning people: I’m using quotations around certain words because you and I probably have an equal understanding of what those words actually mean.)

Rather than having a complicated machine learning application that did some complicated neural network-deep learning-artificial intelligence-stochastic gradient descent-linear regression-Bayesian machine learning magic, I decided to simplify the project into building something that detected humans in a picture and flagged them. I realized that many machine learning tutorials I had read before showed how to do this, so it was a matter of making those tutorials actually useful.

—

The application (with links to code)

I was using Ruby on Rails for the web application that managed the database and rendered content. I did most of the image crawling of Instagram using Ruby, via a Redis-backed job library called Sidekiq, which makes running delayed tasks easy.

For the machine learning logic, I had a code example for object detection, using OpenCV, from a PyImageSearch.com tutorial. The code example was not complete, in that it detected one of 30 different items in the trained image model, one of them being people, and drew a box around the detected object. In my case, I slightly modified the example and placed it inside a simple web application based on Flask.

I made a Flask application with an endpoint that accepted a JSON blob with an image URL. The application downloaded the image URL and processed it through the code example that drew a bounding box around the detected object. I only cared about the code example detecting people, so I created a basic condition to give a certain response for detecting a person and a generic response for everything else.
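A condensed sketch of what such an endpoint can look like (the model file names, the person class index, and the JSON shape are illustrative assumptions following the common MobileNet-SSD Caffe example; the actual code is in the repository linked below):

    import cv2
    import numpy as np
    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # MobileNet-SSD (Caffe) as used in many OpenCV tutorials; "person" is class 15 there.
    net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")
    PERSON_CLASS_ID = 15

    @app.route("/detect", methods=["POST"])
    def detect():
        # Accept a JSON blob with an image URL, download it, and decode it for OpenCV.
        url = request.get_json()["image_url"]
        data = np.frombuffer(requests.get(url).content, dtype=np.uint8)
        img = cv2.imdecode(data, cv2.IMREAD_COLOR)

        # Run the object detector and check whether any confident detection is a person.
        blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()
        person_found = any(
            int(detections[0, 0, i, 1]) == PERSON_CLASS_ID and detections[0, 0, i, 2] > 0.5
            for i in range(detections.shape[2])
        )
        return jsonify({"person": bool(person_found)})

    if __name__ == "__main__":
        app.run()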

This simple endpoint was the machine learning magic at work. Sadly, it was also the first time I’d seen a practical usable example of how the complicated machine learning ā€œstuffā€ integrates with the rest of a web application.

For those who are interested, the code for these are below.

rememberlenny/Flask-Person-Detector
Flask-Person-Detector – Flask based web application that provides a REST endpoint using OpenCV’s Deep Neural Network…github.com

—

Concluding Realizations

I was surprised that I hadn't seen a simple Flask-based implementation of a deep neural network before. Based on this implementation, I also feel that, when training a model isn't involved, applying machine learning to an application is just like having a library with a useful function. I'm assuming that in the future, the separation of the model and the libraries for utilizing the model will be simplified, similar to how a library is "imported" or added using a bundler. My guess is some of these tools exist, but I am not deep enough into the field yet to know about them.

Through reviewing how to access the object detection logic, I found a few services that seemed relevant, but ultimately were not quite what I needed. Specifically, there is a tool called TensorFlow Serving, which seems like it should be a simple web server for TensorFlow, but isn't quite simple enough. It may be what I need, but a server or web application that solely runs TensorFlow is quite difficult to set up.

Web service based machine learning

A lot of the machine learning examples I find online are very self-contained. They start with the problem, then provide the code to run the example locally. Often the input image is provided by file path via a command line interface, and the output is a Python-generated window that displays a manipulated image. This isn't very useful as a web application, so making a REST endpoint seems like a basic next step.

Building the machine learning logic into a REST endpoint is not hard, but there are some considerations worth making. In my case, the server was running on a desktop computer with enough CPU and memory to process requests quickly. This might not always be the case, so a future endpoint might need to run tasks asynchronously using something like Redis. An HTTP request here would most likely hang and possibly time out, so some basic micro-service logic would need to be considered for slow queries.
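A minimal sketch of that asynchronous variant, assuming a Redis-backed job queue via the RQ library (my choice for illustration; any job queue would do):

    from redis import Redis
    from rq import Queue
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    queue = Queue(connection=Redis())

    def detect_person(image_url):
        # Download the image and run the object detector here, as in the endpoint above.
        return {"person": False}

    @app.route("/detect-async", methods=["POST"])
    def detect_async():
        # Enqueue the slow work and return immediately with a job id to poll later.
        job = queue.enqueue(detect_person, request.get_json()["image_url"])
        return jsonify({"job_id": job.id}), 202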

Binary expectations and machine learning brands

A big problem with the final application was that processed graffiti images were sometimes falsely flagged as containing people. When a painting contained features that looked like a person, such as a face or body, the object classifier would falsely flag it. Conversely, there were times when pictures of people were not properly flagged.

Web applications require binary conclusions to take action. An image classifier provides a percentage rating of whether or not the detected object is present. In larger object detection models, the classifier will suggest more than one object as potentially detected. For example, there is a 90% chance of a person being in the photo, a 76% chance of an airplane, and a 43% chance of a giant banana. This isn't very useful when the application processing the responses just needs to know whether or not something is present.
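In practice, the translation from probabilities to a binary answer is a simple thresholding step, sketched below (the threshold value is illustrative and has to be tuned against false positives like the ones described above):

    # Collapse per-class confidences into the yes/no answer the web application needs.
    def contains(predictions, label, threshold=0.8):
        """predictions: a dict like {"person": 0.90, "airplane": 0.76, "banana": 0.43}"""
        return predictions.get(label, 0.0) >= threshold

    contains({"person": 0.90, "airplane": 0.76, "banana": 0.43}, "person")   # True
    contains({"person": 0.90, "airplane": 0.76, "banana": 0.43}, "banana")   # False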

This brings up the importance of quality in any machine learning based process. Given that very few object classifiers or image based processes are 100% correct, the quality of an API is hard to gauge. When it comes to commercial implementations of these object classifier APIs, the brands of services will be largely impacted by the edge cases of a few requests. Because machine learning itself is so opaque, the brands of the service providers will be all the more important in determining how trustworthy these services are.

Conversely, because the quality of machine learning results varies so greatly, a brand may struggle to showcase its value to a user. When the quality of solving a machine learning task is pegged to a dollar amount, for example per API request, the ability to do something for free will be appealing. From the perspective of price, rolling your own free object classifier will look better than using a third-party service. The branded machine learning service market still has a long way to go before becoming clearly preferred over self-hosted implementations.

Specificity in object classification is very important

Finally, when it comes to any machine learning task, specificity is your friend. With graffiti specifically, it's hard to qualify something that varies so much in form. Graffiti itself is a category that encompasses a huge range of visual compositions. Even a person may struggle to qualify what is or isn't graffiti. Compared to detecting a face or a fruit, the specificity of the category matters a great deal.

The brilliance of WordNet and ImageNet is the strength of their categorical specificity. By classifying the world through words and their relationships to one another, there is a way to qualify the similarities and differences of images. For example, a pigeon is a type of bird, but different from a hawk, and both are completely different from an airplane or a bee. The relationships between those things allow for clearly classifying what they are. No such specificity exists for graffiti, but it is needed to properly improve an object classifier.

Final final

Overall, the application works and was very helpful. Making this removed more of the mystery around how machine learning and image recognition services work. As I noted above, this process also made me much more aware of the shortfalls of these services and the places where this field is not yet defined. I definitely think this is something that all software engineers should learn how to do. Before the tools available become simple to use, I imagine there will be a good period of a complicated ecosystem to navigate. Similar to the browser wars before web standards were formed, there is going to be a lot of vying for market share amongst the machine learning providers. You can already see it between services from the larger companies like Amazon, Google and Apple. At the hardware and software level, this is also very apparent between Nvidia’s CUDA and AMD’s price appeal.

More to come!

