
Remember Lenny

Writing online


Machine Learning

How Public Art works

March 7, 2019 by rememberlenny

Details on my React Native iOS application, backed by a Ruby on Rails backend and some Python Jupyter notebook scripts

Public Art is an iOS application that helps you discover new nearby street art.

I’ve been working on this project on my own, but it has a lot of technical moving parts. I will explain how all of the moving parts work and what I’m planning to do in the near future. By the end of this article, I hope you’ll know enough to recreate the same setup for your own project.

Background

First of all, the reason I am working on a street art discovery app is not to build the next urban media empire.

Although that sounds nice, I started this project to preserve graffiti and street art with as much associated metadata as possible for future art connoisseurs.

At one time, I was trying to make a street art media empire. I wrote a series of fragmented blog posts in 2013 that later grew into this project. Most of the posts can still be found here: http://newpublicartfoundation.com/

Given the rate at which photos are uploaded online, I felt it would be a great opportunity to preserve the otherwise transient form of cultural expression that is found around the world. I don’t have a secret surveillance agenda or political motive. I understand the privacy implications of preserving this information, as well as the complicated legal potholes involved.

That all being the case, I feel it’s important that someone preserves street art for the future, and that’s what I’ll go into below.

Frontend

The front-end portion of Public Art is a mix of an Expo-based React Native application and a few interspersed Ruby on Rails and React web pages. The React Native application is a stock Expo application with a modern Redux/React Navigation architecture.

Beyond Redux and React Navigation, I used a number of packages to help speed up development. I used a UI library called NativeBase, which provides some helper components, but eventually transitioned to React Native Elements. Neither of these libraries was strictly necessary, but they provided enough structure to speed up my process. The main thing needed in any good UI library is a good layout structure. For React Native, the most common layout technique I saw was flexbox.

The app primarily loads images and displays mapped points. I initially tried a few helper libraries for gracefully loading images, but eventually found the best performance came from using the React Native Image component as is.


For the map, I depended on the Expo framework’s React Native Maps integration. I explored ways to use Mapbox, but decided not to in order to stay within the Expo ecosystem. That being said, React Native Maps is a great library with all of the control needed for highly responsive maps.

As mentioned above, I used Redux as the primary datastore of the app. For managing the application’s side effects, I decided to use Redux Saga. In the past few React applications I’ve built, I erred on the side of using Redux Thunks. In my last project, I noticed that testing Thunks was overly complicated, and I wanted a more testable solution. After some research, I decided the best bet was Redux Saga. While this took some getting used to, I do see the value and intuitive nature of the Saga-based datastore/side-effect architecture.

Backend

The back-end of Public Art is a combination of a few different “microservices”. In other words, it’s composed of a few web applications that talk to each other over HTTP requests. In addition, I have a Linux box that runs a series of shell scripts and cron jobs providing important functionality that will eventually be replaced with another “service”.


The primary backend and authentication run as a Ruby on Rails application with a few gems, which I’ll explain below. The Rails app runs on Heroku and uses the Heroku Postgres and Redis hosted services. While this is a costlier way to operate (especially because I have free credits with two different hosting providers), the convenience really makes a difference. It’s easy to deploy, manage credentials, and spin workers up and down.

For authentication management, I use the Ruby gem Devise. Devise is a familiar gem for any Rails developer who needs a user profile/authentication system. In my case, the Devise instance is set up with a User model, but all the views and business logic are triggered through a token-based REST API. This was trickier to get set up than expected, but it eventually became the most flexible way to control user activity.

For image uploads, I use the Ruby gem Shrine. Shrine is a modern take on other common image-management gems like CarrierWave, Paperclip, and Refile. The Shrine gem plugs into Amazon’s S3 and provides a simple means of caching image display formats for easy use.


For worker management, I use the ruby gem Sidekiq, which is a Redis job manager. Sidekiq handles all of my asynchronous actions, of which there are many.

Finally, for location-related actions, I use the Ruby gem Geocoder. Geocoder hooks into the Microsoft Bing location API to do reverse geocoding. This means taking a latitude and longitude point and inferring an address.

Overall, the Ruby application handles all of the business logic for creating users, saving images, managing locations, and aggregating all of the information for the iOS frontend to display. All of this happens through various API endpoints that communicate via JSON.

Data/Content

The Public Art app provides a way for any individual to view street art images nearby. This is accomplished by surfacing images that are geotagged with a longitude and latitude point. The images are gathered by user uploads, which are few, and scraping Instagram, which provides many.


The current method of dealing with this is very fragile and will be updated accordingly.

I have created a series of scripts that use a major image-uploading platform as a data source for discovering new images. I use the user-generated categorization system to identify content that may be associated with street art or graffiti, and index the content that has location metadata.

To manage the scraping process, I use a Python script that manages rate limits to the image service. The Python script runs as a Linux process on my server and stores images in the file system. Once the image and the post metadata are downloaded, the script makes a second server request for the location details. The location is stored on the image as an ID and requires a second lookup to get the corresponding coordinates.
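The scraper itself isn’t shown in this post, but the flow described above could be sketched roughly like this. The fetch_posts, fetch_location, and download_image callables are hypothetical placeholders for the actual service requests, and the fixed sleep interval stands in for whatever rate-limit handling the real script uses.

import json
import time
from pathlib import Path

SAVE_DIR = Path("scraped/streetart")
REQUEST_INTERVAL = 30  # seconds between requests, to stay under the service's rate limit


def scrape_posts(fetch_posts, fetch_location, download_image, hashtag="streetart"):
    """Download images plus metadata, resolving location IDs in a second pass.

    fetch_posts, fetch_location, and download_image are placeholders for the
    actual (rate-limited) requests against the image service.
    """
    SAVE_DIR.mkdir(parents=True, exist_ok=True)
    for post in fetch_posts(hashtag):
        # Save the image file to the file system.
        download_image(post["image_url"], SAVE_DIR / f"{post['id']}.jpg")

        # The post only carries a location ID; a second request resolves it
        # into latitude/longitude coordinates.
        metadata = dict(post)
        if post.get("location_id"):
            time.sleep(REQUEST_INTERVAL)
            metadata["location"] = fetch_location(post["location_id"])

        # Store the post metadata next to the image.
        (SAVE_DIR / f"{post['id']}.json").write_text(json.dumps(metadata))
        time.sleep(REQUEST_INTERVAL)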

The downloaded images are uploaded to the Rails application and indexed through a second Python script that runs in a Jupyter notebook. This is an unusual setup for a Python developer, but it works surprisingly well.

I have a Jupyter server running on my Linux machine that iterates through the scraped images, uploads them to the Public Art backend server, then prepares the location metadata and updates the corresponding images.
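As a rough illustration of that notebook’s job, here is a minimal sketch using the requests library. The API base URL, endpoint paths, and field names are assumptions for illustration; the real Rails routes aren’t documented in this post.

import json
from pathlib import Path

import requests

API_BASE = "https://publicart-backend.example.com/api/v1"  # hypothetical base URL
API_TOKEN = "change-me"  # token issued by the Devise-based auth
SCRAPED_DIR = Path("scraped/streetart")

headers = {"Authorization": "Token token={}".format(API_TOKEN)}

for meta_file in sorted(SCRAPED_DIR.glob("*.json")):
    meta = json.loads(meta_file.read_text())
    image_file = meta_file.with_suffix(".jpg")
    if not image_file.exists() or not meta.get("location"):
        continue  # skip posts without a usable image or resolved coordinates

    # 1. Upload the image itself (Shrine handles storage on the Rails side).
    with image_file.open("rb") as f:
        resp = requests.post(API_BASE + "/images", headers=headers,
            files={"image[file]": f})
    resp.raise_for_status()
    image_id = resp.json()["id"]

    # 2. Attach the prepared location metadata to the new record.
    location = {"latitude": meta["location"]["lat"],
                "longitude": meta["location"]["lng"]}
    requests.patch("{}/images/{}".format(API_BASE, image_id),
        headers=headers, json={"image": location}).raise_for_status()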

Machine Learning

Originally this project was meant to have more of a machine learning component, but getting all the other parts right has been the priority. I’ll be doing some work related to search and object detection soon. I’ll also be using more model evaluation to flag content that isn’t street art.

Conclusion

I’ve been doing some additional experiments with ads and promotion which I’ll write about some other time.


Filed Under: Uncategorized Tagged With: Machine Learning, Rails, React, React Native, Street Art

Weekly update February 19, 2019

February 20, 2019 by rememberlenny

Bi-weekly update for February 19th, 2019

Hey! I promised a bi-weekly update, so here’s #2!

I did a lot of new development these past two weeks. To kick it off: I started experimenting with social media ads and selling physical products, built and released 21 versions of an app, majorly upgraded my backend application, and finally got the Python scrape/import process working.

Here are the details:

Two weeks ago, I mentioned the progress on the machine learning tasks I was running, and I got the following message in the Pioneer August cohort.


In short, I was reminded that I can build a sustainable business around the art collected in this project, and was encouraged to consider what that would look like. I had previously written off the notion of selling anything, as I am more interested in the preservation of street art, but the suggestion alone got my mind racing.


I set up a landing page for selling street art posters and set up a variety of social media ads. I targeted people interested in street art and graffiti-related hashtags, and set a small but reasonable budget across the audiences. I noticed that a basic advertisement selling a poster for $26 got a decent response. I also ran a very small (and unreasonable) experiment around “free” posters to get a sense of how the general product was being received versus the price. Overall, this led to the next step.

I explored sourcing poster prints and found the margins of a totally hands-off poster printing business to actually be very reasonable. Even accounting for driving traffic with ads, there is potential for building something that generates income that could be funneled back to artists or photographers. I ordered one poster company’s print and was pleasantly surprised with the paper and print quality given the cost and photo resolution.


Shifting away from the new idea, I spent a lot of effort building out the actual street art tools. The last email was about the machine learning part of training a model to detect street art. This week, I focused more on building the tool to support user-generated content and a pleasant medium for consuming the images.

I decided to fully rebuild my original iOS app that was launched in 2014. Since launching, I hadn’t touched it, and it began collecting proverbial dust.

I had three parts that needed to be revitalized.

First, I needed a new app. Second, I needed fresh content to serve. And third, I needed a way to manage the content uploaded by users.

Regarding the app, I have been thinking about the execution of a good street art application for a while, so I knew what I wanted to do. Rather than focusing on something that needs manual managing and updating, I knew the only way I could be effective was to make a self-updating, self-engaging app that uses feeds of data to refresh itself for users. I also realized that the effective street art browsing method is not a regular cadence of opening the app, but rather a semi-regular summary email/notification that draws in an interested user.

I decided to use React Native and built out a four-part app. The first part provides an editorially curated list of images from a larger community. Each day this creates a fresh set of images that can be viewed. The second part of the app is a search tool that lets people look for images in a specific place. The specific places are most interesting if they are your current location, but given that there are so many images being uploaded daily, the third part of the app is a tool to view trending cities. Finally, the fourth part of the app allows users to upload content on their own and tag/label images.

Demo: https://youtu.be/wRWcbB3HfDY


Based on this model, I was able to get an authentication system up and running that allows users to sign up with a digital identity. This was built around a previous application I had, so I have a way to customize the experience of a user based on their browsing history and potentially create tools around user behavior. This system also allows me to have user-generated content associated with an account, which is important for a variety of reasons.

For the daily update content, I took a shortcut and decided to feed images from the Reddit streetart subreddit. This community uploads images at a steady cadence, so for now, this is my source of editorially curated content.
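For anyone curious what consuming that feed might look like, here is a minimal sketch that pulls recent posts from the subreddit’s public JSON listing. The filtering and field choices are illustrative; the post doesn’t describe the actual ingestion code.

import requests

# Pull the latest posts from the streetart subreddit's public JSON listing.
resp = requests.get(
    "https://www.reddit.com/r/streetart/new.json",
    params={"limit": 50},
    headers={"User-Agent": "public-art-feed/0.1"},
    timeout=30,
)
resp.raise_for_status()

posts = [child["data"] for child in resp.json()["data"]["children"]]

# Keep only direct image posts; other post types link to galleries or text.
image_posts = [p for p in posts if p.get("post_hint") == "image"]

for p in image_posts:
    print(p["title"], p["url"])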

For the location search and trending locations, I was able to use my old street art API from my 2014 app. The server that calculates your current location and the nearest images to that point is still functional. The only problem is that all the old images are no longer accessible due to Instagram’s past platform changes. As a result, I needed to rebuild the dataset around this server.

To do that, I have been scraping images for the past couple of months, but hadn’t been able to process them to refresh the local art discovery service. To get the images ready, I needed to write a small program that checks whether the scraped images have associated location data, then uploads the images to my application and creates a location data point correlated to each image. This was something I kept putting off, but I finally took the time to do it.

I ended up writing the image uploader and location metadata association script in a Python notebook. Because it was such an iterative process to get right, I surprisingly found the notebook format useful. This was very unexpected.


I got the first batch of 10,000 images working and have many hundreds of thousands of images to process accordingly. Fortunately for the most recent batch, I scraped the images with location data. As a result, the images were slower to download, but I only had one remaining step after I was done.

For the remaining images, I need to add a step of checking if the downloaded images have corresponding location data. This shouldn’t take too long.

I have a few more possible tasks to figure out. One is that my image scraper saves images to the file system. The ideal situation would be a program that directly scrapes images, does all the other work needed to get the location data, and imports the images into my application. Because there is so much rate limiting around the scraper, this is harder than it sounds. As a result, I need to make some kind of daemon that monitors my filesystem for new files and manages the scraped images. This daemon would ideally check which files were already checked/uploaded, and then I could let the scraper keep operating as it is.
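A minimal version of that daemon could be a simple polling loop that remembers which files it has already handled. This is only a sketch: the upload_image callable is a placeholder for the existing upload/metadata script, and a real implementation might use inotify or the watchdog library instead of polling.

import time
from pathlib import Path

WATCH_DIR = Path("scraped/streetart")
PROCESSED_LOG = Path("processed_files.txt")  # names of files already handled
POLL_INTERVAL = 60  # seconds between directory scans


def load_processed():
    if PROCESSED_LOG.exists():
        return set(PROCESSED_LOG.read_text().splitlines())
    return set()


def watch(upload_image):
    """Poll WATCH_DIR for new images and hand each one to upload_image().

    upload_image is a placeholder for the existing upload/metadata script.
    """
    processed = load_processed()
    while True:
        for image_path in sorted(WATCH_DIR.glob("*.jpg")):
            if image_path.name in processed:
                continue
            upload_image(image_path)
            processed.add(image_path.name)
            with PROCESSED_LOG.open("a") as log:
                log.write(image_path.name + "\n")
        time.sleep(POLL_INTERVAL)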

Separately, I noticed that a lot of the newer images I have been getting have less accurate location metadata. I think this is part of the privacy/security shift on the Instagram platform. Although it’s not explicit, I imagine that the Instagram UI defaults to auto-populating locations that are less specific when people upload images. As a result, I will likely need to find ways to properly associate images with their proper locations.

Lots of stuff happening and more to come!

Filed Under: Uncategorized Tagged With: Machine Learning, Public Art, Street Art

Weekly update Monday, February 4

February 20, 2019 by rememberlenny

Weekly update February 4, 2019

I will be doing my best to send a bi-weekly update on the progress around my efforts to build out a street art genealogy online, and create a tool for preserving otherwise undocumented street art.

So far in 2019, I have many exciting updates:

* Open-sourced tools for detecting street art in images using machine learning[¹]
* Published a 4000+ word article on how to train a convolutional neural network (CNN) to recognize street art in location tagged photos online.[²]
* Released a dataset of 6000 street art images and non-street art New York City images for CNN training.[³]
* Met with authors of three street art discovery/preservation apps [⁴]
* Presented the Public Art project and model training process at BetaWorks [⁵]

I have been focusing on three major parts of this project: image collection, data analysis, and presentation.

For image collection, I have continued to use Instagram scraping as the primary source of new images. This method has been effective for quickly gathering data to train deep learning models, but it does not offer a long-term solution for image aggregation. I have already noticed a few times that the primary methods for image collection have been shut down. Even so, I am able to gather hundreds of thousands of newly uploaded images a week, which no alternative user-generated method comes close to.

For data analysis, I have been analyzing images with associated location metadata by training deep learning models around artists and street art types (stencils, murals, letterform). I have also spent a lot of time in Python notebooks, trying to find trends in certain periods of scraped images. I have been experimenting with “hot spot” detection based on images photographed in a specific area within a small window of time. For example, detecting when new images are found from multiple people within a shorter interval than previously observed.
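As a sketch of what that hot-spot detection could look like, the snippet below buckets image records by a coarse latitude/longitude grid cell and a time window, then flags buckets where several distinct uploaders appear. The grid size, window length, and threshold are illustrative values, not the ones used in the project.

from collections import defaultdict

GRID_SIZE = 0.01      # degrees of latitude/longitude per cell (roughly 1 km)
WINDOW_HOURS = 48     # size of the time window
MIN_UPLOADERS = 3     # distinct uploaders needed to call a bucket a hot spot


def find_hot_spots(records):
    """records: iterable of (latitude, longitude, uploader, timestamp) tuples,
    where timestamp is a datetime. Returns the (cell, window) buckets that
    look like hot spots."""
    buckets = defaultdict(set)
    for lat, lng, uploader, taken_at in records:
        cell = (round(lat / GRID_SIZE), round(lng / GRID_SIZE))
        window = int(taken_at.timestamp() // (WINDOW_HOURS * 3600))
        buckets[(cell, window)].add(uploader)
    return [key for key, uploaders in buckets.items() if len(uploaders) >= MIN_UPLOADERS]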

Finally, for presentation, I have been working on two methods: a website and an email newsletter. For the website, I have fortunately been able to quickly build a web interface for loading and navigating images, but I do not feel that the current methods fulfill my original intention for the project. As a result, I have not publicly released any updates on this front.

For the newsletter, I have created a set of tools to determine if “new” street art is discovered in a place. Currently, I manually run four Python scripts that monitor new images found in certain locations. I am working on establishing a steady stream of images I can monitor, to generate a weekly newsletter of “the best local street art” for interested subscribers.

I have recently been encouraged to consider the larger vision around the Public Art project. I am building out a steady infrastructure for housing and collecting street art, but I do not have a plan for attracting an active audience. To use the analogy of building a city: I am building a beautiful city with few inhabitants, but it could develop into a blossoming city as populous as Tokyo. Based on this, I will be consolidating my efforts.

I would appreciate thoughts on whether to build a business around the audience interested in street art, or to follow a non-profit route. When considering the business route, I can clearly see productizing the collected images into high-margin art products, such as printed posters. The sales model around street art products offers the opportunity to drive paid traffic to the website, which would also generate traffic leading to user-generated image contributions. If I pursue the non-profit route, I will not have the luxury of buying growth.

Please send your thoughts to [email protected]

[1]: https://github.com/rememberlenny/streetart-notstreetart

[2]: https://blog.floydhub.com/instagram-street-art/

[3]: https://www.floydhub.com/rememberlenny/datasets/streetart-notstreetart/3

[4]: https://www.canvsart.com/ & https://artpigeon.nyc/

[5]: https://betaworks-studios.com

Filed Under: Uncategorized Tagged With: Artificial Intelligence, Machine Learning

On Building an Instagram Street Art Dataset and Detection Model

February 13, 2019 by rememberlenny


What if you could pump all of the Instagram photos of Banksy’s artwork into a program that could pinpoint where the next one’s likely to be?

Well, we aren’t there quite yet, but there’s still some really cool stuff you can accomplish using image analysis and machine learning to better understand street art.

You can use machine learning models to detect whether an Instagram photo contains street art — even classify the type of street art. For example, you can make a classifier for stencil art, letterform, portrait murals, or mixed medium installations.

In this article, I will go over how to build a deep learning model using TensorFlow and Keras that accomplishes the task of generally detecting street art by using publicly available social media data on Instagram.

Results from the first version of my model. Notice that there are a number of false positives. We’ll improve this later by cleaning up our respective datasets.

To my knowledge, there isn’t a publicly available dataset of street art or graffiti. But we’ll go over a few simple techniques for creating datasets from publicly available images on the Internet and social media — which will soon become indispensable tools in your machine learning toolkit. After reading this article, you’ll be able to leverage these methods to generate your own datasets for anything you need.

We’ll also learn how to build a TensorFlow model using Keras trained on our street art dataset. Then we will use this deep learning model to detect if new images contain street art photos.

Just pick an Instagram hashtag, grab some images, and train your deep learning model.

In the future, nearly everything will be photographed, and indirectly analyzable with machine learning. Learning how to train models to analyze this content yourself and understand the results is a superpower worth cultivating.

Overview of our Instagram street art dataset and model

Here’s a quick overview of our process:

  1. Build a street art deep learning image dataset using hashtag results for #streetart
  2. Use the images to build a deep learning model that will predict if images contain street art
  3. Clean the dataset and retrain the model for improved results

We’ll follow these three steps to build a real, functioning model for classifying street art. The model here is based on the ResNet architecture from the “Deep Residual Learning for Image Recognition” (2015) paper and can be duplicated using other architectures.

You can view the finalized codebase in this Github repository. You can also open up the codebase (including the datasets I’ve collected) on FloydHub in a JupyterLab Workspace by clicking this button:

Building the image dataset

Let’s recap our goal. We want to build a TensorFlow deep learning model that will detect street art from a feed of random images. We will start by pulling hashtagged images that offer a good preliminary dataset of street art. Then, we will use the same method to pull images that are not street art, but that may resemble the images we will encounter, to train against. Using the two sets of images, we will train our model to classify whether images do or don’t contain street art.

The Internet is full of places to gather data to train models. If you are looking for specific images, Google image searches offer an unbeatable way to get numerous images on a single subject. PyImageSearch provides an excellent guide on building a deep learning dataset using Google Images.

Street art results from Google images

Although the Google results method is straightforward, we want to emulate the process of building a model from social media data. As a result, we will train directly on the data source of choice: Instagram.

The same method discussed in the blog post linked above could be used for us. Simply load up Instagram’s web interface, search for the terms you want, then download all the images loaded in the browser.

We will go about it a little differently, in that we will use a library that simulates this process for us. While the method discussed below is one way to accomplish this, it is far from the only way.

If you want to skip downloading the street art images yourself, and just download a sample dataset, then skip to the next section titled: “Prepare your dataset”.

Getting street art images

We will be using a Python library called Instaloader that provides an easy interface for downloading posts by hashtag or location. It respects the rate-limited request interval while downloading the images needed to train our model.

Details of the library can be found here: https://instaloader.github.io

Let’s start by setting up our Python environment and installing instaloader.

pip install instaloader

That’s all we need to get the library working. Next, we will build our own street art image dataset for training our model by running the command-line command below:

$ instaloader --no-videos --no-metadata-json --no-captions "#streetart"

This command can be better understood by reviewing the instaloader docs:


In short, we will be downloading images that have the hashtag “streetart”. We don’t want to download videos. The instaloader library will download the image’s caption data and metadata by default, so we also pass flags to prevent this.

Example of images gathered to help train our street art deep learning model

In an alternative use case, we could also download the metadata associated with each image to collect the image’s respective longitude and latitude points. This would allow us to associate images with a specific location. Not all images have this metadata, but the downloaded data is still a good start. Definitely something that’s worth exploring in a future project!

Once the command above runs, you will see it slowly downloading images into a newly created folder called /#streetart. Once you have enough images (approximately 1,000 is a good base), you can stop the command.

Getting images to compare against


Next, we need to download images that are not street art related. This dataset will determine the environment in which our model will perform best. If we train against a series of identical types of images, such as pictures of cats or dogs, then our model will not be refined when deployed in a production environment.

In our hypothetical final use case, we would like our model to perform well when classifying images from a location feed, so we will pull images from a city: New York. This will also be helpful as our model trains, because the image set from New York will contain content that helps the model differentiate certain urban subjects from the street art content.

Please note: when you use the method above, you will get a wide range of images. Due to forces beyond our control, some of these images may not be safe for work. 😬

To download images for a specific location, you must first find the location’s id. To find this, you can log into the Instagram web interface and do a search for the location you want. The URL will populate with the location’s ID, as seen below:

https://www.instagram.com/explore/locations/212988663/new-york-new-york/

As seen in the URL above, the New York location id is: 212988663. Using this location id, now initiate a new instaloader query:

$ instaloader --no-videos --no-metadata-json --no-captions "%212988663"


Similar to before, the command above will download images from the location ID of choice, without any extra files. Let this process run for about as long as you ran the previous command, so you have a roughly equal number of images in your two image sets.

Example of images that we will use in our training dataset of content that is “not street art”

Prepare your dataset

If you followed the instructions above, you should have two directories titled /#streetart and /%212988663, respectively. First, because dealing with non-alphanumeric characters in file names is a pain in the butt, let’s rename those directories /streetart and /not_streetart, respectively.

Now create a folder called /dataset/images and move the two renamed folders into it. Your file directory should look like this:

.
└── dataset
    └── images
        ├── not_streetart
        └── streetart

If you didn’t follow the instructions above, you can download the dataset I’ve already prepared from FloydHub here:

https://www.floydhub.com/rememberlenny/datasets/streetart-notstreetart/

You can also run the corresponding Python notebook in a FloydHub Workspace. This will let you easily follow along with the model training code in a Jupyter notebook environment.

Now that we have our images to train with, we need a way to break them up into the proper training, validation, and test sets. We can do this with the following script pulled from Adrian Rosebrock’s build script:

Code is adapted from Rosebrock’s build_dataset.py:

import random
import shutil
import os
from imutils import paths
# Set up paths for original images and training/validation/test
ORIGINAL_IMAGES = "dataset/images"
TRAINING_PATH = "dataset/training"
VALIDATION_PATH = "dataset/validation"
TESTING_PATH = "dataset/testing"
# Define the percentage of images used in training (80%),
# and the amount of validation data
TRAINING_SPLIT = 0.8
VALIDATION_SPLIT = 0.1

First we start with our imports and setting constants. imutils is a useful library created by Rosebrock for easy file and path manipulation.

# Access and shuffle original images
imagePaths = list(paths.list_images(ORIGINAL_IMAGES))
random.seed(42)
random.shuffle(imagePaths)
# Compute the training and testing split
i = int(len(imagePaths) * TRAINING_SPLIT)
trainingPaths = imagePaths[:i]
testingPaths = imagePaths[i:]
# Use part of the training data for validation
i = int(len(trainingPaths) * VALIDATION_SPLIT)
validationPaths = trainingPaths[:i]
trainingPaths = trainingPaths[i:]
# Define the datasets
datasets = [
    ("training", trainingPaths, TRAINING_PATH),
    ("validation", validationPaths, VALIDATION_PATH),
    ("testing", testingPaths, TESTING_PATH)
]

Next, we prepare our image files into the various training, validation and test sets. This allows us to have a unique set of images that are used for training and validation, then separately for testing.

for (dType, imagePaths, baseOutput) in datasets:
    # If the output directory doesn't exist, create it
    if not os.path.exists(baseOutput):
        os.makedirs(baseOutput)

    # Loop over the input image paths
    for inputPath in imagePaths:
        # Extract the filename of the input image along with its
        # corresponding class label
        filename = inputPath.split(os.path.sep)[-1]
        label = inputPath.split(os.path.sep)[-2]
        # Build the path to the label directory
        labelPath = os.path.sep.join([baseOutput, label])
        # If the label output directory doesn't exist, create it
        if not os.path.exists(labelPath):
            os.makedirs(labelPath)
        # Construct the path to the destination image and then copy
        # the image itself
        p = os.path.sep.join([labelPath, filename])
        shutil.copy2(inputPath, p)

Finally, this copies the training, validation, and testing images into their own respective directories.

Dataset prep summary


To summarize, the script checks for your images in /dataset/images, then does the following:

  1. Load all of the original downloaded image paths and shuffle them into a random order.
  2. Split up the images into the following sets: 80% reserved for training (10% of which is held out for validation), with the remaining 20% for testing.
  3. Make the respective directories and move images into /dataset/training, /dataset/validation, and /dataset/testing.

Note: All of your original images will stay in the /dataset/images folder. Once your dataset is split up, your images are ready to be used for training.

Train your deep learning model

Now we will use our dataset to train our model. Our deep learning model will be trained using Keras with a ResNet based CNN architecture.

The training code below is primarily taken from lessons in the Deep Learning for Computer Vision with Python book and, as you might have guessed at this point, the PyImageSearch blog by Adrian Rosebrock. I really enjoy his blog and can’t recommend it enough for concrete code examples and practical tutorials. As a result, many of the points below will be summaries, with a link to the final code.

Code is adapted from Rosebrock’s save_dataset.py, which we will call train_model.py.

from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import SGD
from pyimagesearch.resnet import ResNet
from sklearn.metrics import classification_report
from imutils import paths
import numpy as np
NUM_EPOCHS = 30
BATCH_SIZE = 32
TRAINING_PATH = "dataset/training"
VALIDATION_PATH = "dataset/validation"
TESTING_PATH = "dataset/testing"
MODEL_NAME = "streetart_classifer.model"
# Determine the total number of image paths in training, validation,
# and testing directories
totalTrain = len(list(paths.list_images(TRAINING_PATH)))
totalVal = len(list(paths.list_images(VALIDATION_PATH)))
totalTest = len(list(paths.list_images(TESTING_PATH)))

To start, we will import our dependencies and assign our constants.

We will be using Keras as our training library because it’s simple and provides a thorough API for our needs. The same steps could be replicated with other deep learning libraries like PyTorch and the fast.ai library. Keras provides a simple, modular neural network library that can flexibly use various other machine learning frameworks as its backend. In my case, I will be using it with TensorFlow, but that shouldn’t matter. One note about Keras is that it doesn’t support multi-GPU training by default.

Note the pyimagesearch.resnet import: this is a folder containing our Keras implementation of our ResNet architecture.

# Initialize the training data augmentation object
trainAug = ImageDataGenerator(
    rescale=1 / 255.0,
    rotation_range=20,
    zoom_range=0.05,
    width_shift_range=0.05,
    height_shift_range=0.05,
    shear_range=0.05,
    horizontal_flip=True,
    fill_mode="nearest")

# Initialize the validation (and testing) data augmentation object
valAug = ImageDataGenerator(rescale=1 / 255.0)

Unlike ImageNet or COCO, our dataset is relatively small. Because “street art” comes in many shapes, sizes, and colors, and in a variety of environments, we will use data augmentation to help improve our training. Using the Keras image preprocessing API, we will create data augmentation objects that generate new images from our dataset with random modifications.

To learn more about data augmentation, see the Keras API documentation or take a look at a great blog post on data augmentation.

# Initialize the training generator
trainGen = trainAug.flow_from_directory(
    TRAINING_PATH,
    class_mode="categorical",
    target_size=(64, 64),
    color_mode="rgb",
    shuffle=True,
    batch_size=BATCH_SIZE)

# Initialize the validation generator
valGen = valAug.flow_from_directory(
    VALIDATION_PATH,
    class_mode="categorical",
    target_size=(64, 64),
    color_mode="rgb",
    shuffle=False,
    batch_size=BATCH_SIZE)

# Initialize the testing generator
testGen = valAug.flow_from_directory(
    TESTING_PATH,
    class_mode="categorical",
    target_size=(64, 64),
    color_mode="rgb",
    shuffle=False,
    batch_size=BATCH_SIZE)

Once the augmentation objects are set up, we will generate the new images on the fly for our training, validation, and testing datasets.

# Initialize our Keras implementation of ResNet model and compile it
model = ResNet.build(64, 64, 3, 2, (2, 2, 3),
    (32, 64, 128, 256), reg=0.0005)
opt = SGD(lr=1e-1, momentum=0.9, decay=1e-1 / NUM_EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# Train our Keras model
H = model.fit_generator(
    trainGen,
    steps_per_epoch=totalTrain // BATCH_SIZE,
    validation_data=valGen,
    validation_steps=totalVal // BATCH_SIZE,
    epochs=NUM_EPOCHS)

# Reset the testing generator and then use our trained model to
# make predictions on the data
print("[INFO] evaluating network...")
testGen.reset()
predIdxs = model.predict_generator(testGen,
    steps=(totalTest // BATCH_SIZE) + 1)

# For each image in the testing set we need to find the index of the
# label with corresponding largest predicted probability
predIdxs = np.argmax(predIdxs, axis=1)
# show a nicely formatted classification report
print(classification_report(testGen.classes, predIdxs,
    target_names=testGen.class_indices.keys()))

We build, compile, and train our ResNet model using the augmented street art dataset. Our training script will make predictions on the test dataset, then index the highest probability class on each prediction.

# Save the neural network to disk
print("[INFO] serializing network to '{}'...".format(MODEL_NAME))
model.save(MODEL_NAME)

The final results will be stored in a model named streetart_classifer.model which we can then deploy to classify new street art.

Training summary

In summary, the training script does the following:

  1. Import the various preprocessing services and helper utilities from libraries such as Keras. Also assign our constant values that we will use to access our dataset.
  2. Set up data augmentation objects to prepare our small dataset for training our deep learning model.
  3. Prepare our data augmentation objects to process our training, validation and testing dataset.
  4. Build, compile and train our ResNet model using our augmented dataset, and store the results on each iteration.
  5. Finally, save the trained model.

Using our trained street art model to classify new Instagram photos

Now that you have a model that detects street art effectively, we can see how it works on real images. We will use the code below to evaluate the model against an image, and then render the results onto the image with the OpenCV Python library.

from keras.preprocessing.image import img_to_array
from keras.models import load_model
import numpy as np
import random
import cv2
from imutils import build_montages
from imutils import paths
from IPython.display import Image

Assuming this is a new environment, we first load our libraries. We will use the Keras load_model function to load our newly created model, along with some utility libraries for testing the model on a random set of data. One convenient utility library, imutils, provides a function that easily renders an image montage when fed a list of images.

MODEL_NAME = 'streetart_classifer.model'
MONTAGE_FILENAME = 'streetart_photo.png'
IMAGES_PATH = 'dataset/testing'
model = load_model(MODEL_NAME)
imagePaths = list(paths.list_images(IMAGES_PATH))
random.shuffle(imagePaths)
imagePaths = imagePaths[:1]
# initialize our list of results
results = []

Now we will set our constants referencing our model, rendered image name, and sample image path.

If we are in a Python Jupyter notebook, we don’t need to load the model again. We will then load our test image paths and randomly select an image to load. In the imagePaths[:1] expression, the 1 determines how many images to load, and it can be increased as described in the next part.

# loop over our sampled image paths
print("[INFO] evaluating model against test set...")
for p in imagePaths:
    # load our original input image
    orig = cv2.imread(p)

    # pre-process our image by converting it from BGR to RGB channel
    # ordering (since our Keras model was trained on RGB ordering),
    # resize it to 64x64 pixels, and then scale the pixel intensities
    # to the range [0, 1]
    image = cv2.cvtColor(orig, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (64, 64))
    image = image.astype("float") / 255.0

    # order channel dimensions (channels-first or channels-last)
    # depending on our Keras backend, then add a batch dimension to
    # the image
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)

    # make predictions on the input image
    pred = model.predict(image)
    print(pred)
    not_street_art_probability = pred.item(0)
    street_art_probability = pred.item(1)
    pred = pred.argmax(axis=1)[0]

    # an index of zero is the 'Not street art' label while an index of
    # one is the 'Street art found' label
    label = "Not street art ({0})".format(not_street_art_probability) if pred == 0 else "Street art found ({0})".format(street_art_probability)
    color = (255, 0, 0) if pred == 0 else (0, 255, 0)

    # resize our original input (so we can better visualize it) and
    # then draw the label on the image
    orig = cv2.resize(orig, (800, 800))
    cv2.putText(orig, label, (3, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
        color, 2)

    # add the output image to our list of results
    results.append(orig)

We will loop over the images in our test file paths. If we only load one image, the loop is redundant. We load each image and invoke our model’s predict function to get the probability score for the image containing street art. The array we get back has the “Not street art” score at index zero and the “Street art found” score at index one.

We generate a new image and apply text using OpenCV’s putText method. The text contains the image’s predicted label and the respective probability score.

Once the image is created, we append it onto the results array.

montage = build_montages(results, (800, 800), (1, 1))[0]
cv2.imwrite(MONTAGE_FILENAME, montage)
img = cv2.imread(MONTAGE_FILENAME)
Image(filename=MONTAGE_FILENAME)

Finally, we use the build_montages library to render the images in the results array to create a montage. You can learn about the build_montages function here. In our example, we are only rendering one image, but the third parameter for build_montages can be changed to determine the number of rows and columns of images to render from the image array source. This is what I used to make the large montage of labeled images earlier in this post.

For reference, you can see another example of using the build_montages function below:

# Example of using build_montages; not part of the street art model evaluation
import cv2
from imutils import build_montages
from imutils import paths

IMAGES_PATH = 'dataset/testing'
imagePaths = list(paths.list_images(IMAGES_PATH))
imagePaths = imagePaths[:3]
img_list = []
for p in imagePaths:
    # load our original input image
    orig = cv2.imread(p)
    img_list.append(orig)

# convert the image list into a montage of 256x256 images tiled in a 3x1 montage
montages = build_montages(img_list, (256, 256), (3, 1))

# iterate through the montages and display them
for montage in montages:
    cv2.imshow('montage image', montage)
    cv2.waitKey(0)

Now back to the model evaluation.

We store the resulting montage as an image and then use the IPython helper function to render the image. If we run this script as an independent file, we could also invoke the OpenCV image display function.

The images below are three examples of running the model evaluation code. I ran the notebook with the code and saved the images from the notebook as a file.

Street art found (even though this chameleon tried to blend in)

More notorious street art found!

Not street art. But, yes, adorable puppy.

Our image classifier successfully detects street art, as seen in the two images above. Each image containing a mural painting is classified correctly.

When we run the same classifier against obviously non-street-art images, we receive a high-probability “Not street art” result as well.

Viewing false positives

Some of the false positives from the model are images whose classification depends on a person’s interpretation of what counts as street art. Based on the photos we trained against, photos of urban building landscapes and advertisements are incorrectly categorized.

In our case, this hashtag dataset is an imprecise classification system in general, since some people will tag things incorrectly or subjectively.

False positive of a blue building

Improving the results

Based on our initial hashtag and location datasets, I got roughly 60–65% accuracy in my training results. I was training on an NVIDIA 1080 Ti card with a batch size of 32 for 30 epochs, which took about an hour.

To significantly improve this, one concrete step is to manually review the /dataset/images/streetart and /dataset/images/not_streetart folders. By reviewing the folders’ contents, you can delete the images that are incorrectly labeled. In our case, because we pull data from Instagram and use an undependable primary marker – the hashtags – to determine our dataset, we potentially have the wrong content appearing in both the street art and not-street-art folders.

Once I reviewed the original crawled images, I found many that had the hashtag #streetart but were not actually street art related. Photos with no street art in them tagged #streetart pollute the model training. Similarly, since New York is one of the most popular places for finding street art, I found pictures in the /not_streetart folder from the New York City location feed that were actually of street art or graffiti. To clean up the classifier, I had to delete these photos.

Cleaning the dataset

After cleaning up the datasets manually and running the training process again, I was able to improve the model to 80% accuracy. You can see the run data on FloydHub here. FloydHub automatically generates training metrics charts for each job when you’re using Keras:


Next steps for Instagram street art model

This was a practical application of building your own deep learning dataset around a social media source and training a model to classify the respective subject. While street art was the subject of this post, the same techniques could be used for a subject of your choosing.

To take this further, the images being analyzed for street art could be segmented to differentiate paintings from their backgrounds. Scene recognition models would be hugely impactful in reducing the false positives caused by various indoor artwork. Similarly, using other models such as PlacesCNN, we could better characterize the “street art-ness” that the finalized model picks up on.

If you’re interested in analyzing street art, you could expand this project even further by:

  • Get street art images labeled with the artist
  • Build a model for categorizing different kinds of street art
  • Explore the comment and image description metadata associated with the images using semantic analysis
  • Correlate the location metadata on images to find correlations or unique qualities by geography
  • Analyze street art location data to find correlations or trends related to social phenomena
  • Use the models in production to compare against live location feeds

Thanks to

FloydHub’s AI Writer program and Charlie Harrington for editorial support! Huge thanks to Adrian Rosebrock’s blog for the many code examples used. Thanks to Tyler Cowen’s Emergent Ventures for grant funding to explore this project and the Pioneer Tournament, led by Daniel Gross and Rishi Narang.


About Lenny

Lenny is building a digital genealogy of street art at Public Art. He’s scraping the internet and making a searchable database of street art around the world. One of his project’s goals is to amplify the voice of “protest art” against the constraints of censorship from autocratic governments. He’s also a FloydHub AI Writer.

You can follow along with Lenny on Twitter at @rememberlenny or his project newsletter http://publicart.io.

Links

  • Complete code examples
  • Building a deep learning dataset with Google Images
  • Instaloader
  • Prepared dataset used in this post
  • PyImageSearch blog post on building a deep learning model for medical image analysis with Keras
  • PyImageSearch blog post on saving a deep learning model build with Keras
  • PyImageSearch blog post on data augmentation with Keras
  • PlacesCNN
  • Public Art

Originally posted on FloydHub’s AI Writer’s blog: https://blog.floydhub.com/instagram-street-art/

Filed Under: Uncategorized Tagged With: Deep Learning, Digital Humanities, Instagram Marketing, Machine Learning, Street Art

Tracking street art with machine learning — updates

November 8, 2018 by rememberlenny

Mural from Reyes, Revok and Steel from MSK (https://www.fatcap.com/live/revok-steel-and-reyes.html)

Thank you for following the Public Art [⁰] project, which is building a genealogy of street art using machine learning. This project aims to create a central place for documenting street art from around the world, and to use modern image analysis techniques to build a historical reference of public art for the future.

Public Art iOS application

As a quick update, this project began in 2014 during Gary Chou’s Orbital bootcamp [¹], when I built a series of small projects exploring how software and graffiti can co-exist in experimental side projects. One of those side projects was an experiment in crawling Instagram images and building an iOS app for browsing street art near you. This app, which is no longer fully functional, is still on the iOS App Store [²].

This past August, I began participating in the Pioneer tournament, which is a monthly tournament built around a community of creative young people working on interesting projects around the globe. I decided to restart the project around documenting graffiti, by integrating my familiarity with machine learning.

Kickstarter page

In September, I ran a “Quickstarter”, which is a $100 Kickstarter project, and surprisingly, beyond friends, found a number of complete strangers who were interested in the project [³]. This project gave me confidence to further explore how street art and software could co-exist.

Around the same time, I continued crawling more images from public resources online and found a huge issue with my old methods. While I could still crawl Instagram, similarly to how I did in 2014, much of the metadata I needed for historical purposes was no longer available. Specifically, I didn’t have access to the geographical data that was key to making these images useful. I wrote briefly on this here: On the post-centralization of social media [⁴].



PublicArt.io current website’s functional prototype

Since then, I have moved my focus away from building tools to crawl public resources and toward building a foundation on to which publicly documented street art can be stored online.

This will emulate the many photo-sharing services already online, inspired by Flickr, Instagram, and Imgur, to name a few. The focus of the service will be solely to document street art, help collect images of art pieces, view artists’ work, and provide public access to this data.

I am proud to announce that Tyler Cowen [⁵], of the Mercatus Center from George Mason University [⁶], has extended his Emergent Ventures fellowship to my project [⁷].

Emergent Ventures

Although this project was originally personally funded, I now feel greater confidence in extending my time to build out tools. With this grant, I am confident I am building something that can sustain its own costs and prove its worth.

Prior to my current state of exploration, I was experimenting with applying image feature extraction tools with embedding analysis techniques to compare how different street art pieces are similar or different. To over-simplify and explain briefly: Image feature extraction tools can take an image and quantify the presence of a single parameter, which represents a feature [⁸].

Im analyzing graffiti images with machine learning techniques to build a genealogy of graffiti.

I use a convolutional neural network based feature extraction and encoded results. This shows 5,623 photos cluster the similar artists based on 25,088 dimensions. pic.twitter.com/BcYLyCMZSq

— 👋 Leonard Bogdonoff (@rememberlenny) September 10, 2018

The parameter can then be simplified into a single number, which can be compared across images. With machine learning tools, specifically the TensorFlow Inception library [⁹], tens of thousands of features can be extracted from a single image and then compared against the features extracted from other images.
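As a rough sketch of that idea, the snippet below uses a pre-trained InceptionV3 from Keras to extract a feature vector per image and compares two images with cosine similarity. The image filenames are placeholders, and the pooled 2,048-dimensional vector here is smaller than the 25,088-dimensional features mentioned in the tweet above; removing the pooling argument yields the full convolutional feature map instead.

import numpy as np
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.preprocessing import image

# Pre-trained InceptionV3 without its classification head; average pooling
# collapses the convolutional features into one vector per image.
model = InceptionV3(weights="imagenet", include_top=False, pooling="avg")


def extract_features(path):
    img = image.load_img(path, target_size=(299, 299))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x)[0]


def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Compare two street art photos by their extracted feature vectors.
features_a = extract_features("piece_one.jpg")   # placeholder filenames
features_b = extract_features("piece_two.jpg")
print(cosine_similarity(features_a, features_b))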

By taking these embeddings, I was able to generate very interesting three-dimensional visualizations that showed how certain artists are similar or different. In the most basic cases, stencil graffiti mapped to the same region of the space, while graffiti “bombs” and larger murals each mapped to their own respective regions [¹⁰].

Using the hundreds of thousands of images I was able to crawl from Instagram before the geographical data was made inaccessible, I analyzed the presence of street art around the world over time [¹¹].

Video: 30 seconds of animated geolocation data for street art images taken around the world over time pic.twitter.com/YzTjdN2sLY

— 👋 Leonard Bogdonoff (@rememberlenny) November 2, 2018

This data, which was no longer associated with the actual images that were originally indexed — due to Instagram’s change in policy — provided insight into the presence of street art and graffiti around the world.

Interestingly, the image frequency also provided a visual that alludes to an obvious relationship between urban centers and street art. If this were analyzed further, there may be clear correlations between street art and real estate value, community social ties, political engagement, and other social phenomena.

In the past few days, I have focused on synthesizing the various means with which I expect to use machine learning for analyzing street art. Because of the media’s misrepresentation of artificial intelligence and the broad meaning of machine learning in the technical/marketing field, I was struggling with what I meant myself.

Prior to this project’s incarnation, I had thought it would be possible to build out object detection models to recognize different types of graffiti in images. For example, an expression of vandalism is different than a community sanctioned mural. I also imagined it would be possible to build out ways of identifying specific letters in larger letter-form graffiti pieces. I believe it would be interesting to combine the well defined labels and data set with a variational auto-encoder to generate machine learning based letter-form pieces.

Going further, I thought it would be possible to use machine learning to detect when an image in a place was “new”, based on it not having been detected in previous images from that specific place. I also thought it would be interesting to find camera feeds of railway cars traveling across the US and build a pipeline for capturing the graffiti on train cars, identifying each train car’s serial number, and tracking how train cars and their respective art travel the country.


All of the above points are practical expressions of the machine learning based analysis techniques.


While these are interesting projects, I have narrowed my focus to the following six points for the time being: recognizing artists’ work, tracking similar styles/influences, geo-localizing images [¹²], categorizing styles, correlating social phenomena, and finding new art. By tracking the images, the content, and the frequency of images, and making this data available to others, I believe street art can create more value as it is and gain even more respect.

Based on recent work, I have gotten a fully functional application working that allows users to create accounts, upload images, and associate important metadata (artist/location/creation date) with images. While the user experience and design are not anywhere near what I would be proud of, I will be moving forward with testing the current form with existing graffiti connoisseurs.

As I continue to share about this project, please reach out if you have any interest directly or would like to learn more.


[0]: https://www.publicart.io
[1]: https://orbital.nyc/bootcamp/
[2]: http://graffpass.com
[3]: https://www.kickstarter.com/projects/rememberlenny/new-public-art-foundation-a-genealogy-of-public-st/updates
[4]: https://medium.com/@rememberlenny/on-the-instagram-api-changes-f9341068461e
[5]: http://marginalrevolution.com
[6]: https://mercatus.org/
[7]: https://marginalrevolution.com/marginalrevolution/2018/11/emergent-ventures-grant-recipients.html
[8]: https://en.wikipedia.org/wiki/Feature_extraction
[9]: https://www.tensorflow.org/tutorials/images/image_recognition
[10]: https://twitter.com/rememberlenny/status/1038992069094780928
[11]: Geographic data points — https://twitter.com/rememberlenny/status/1058426005357060096
[12]: Geolocalization — https://twitter.com/rememberlenny/status/1053626064738631681

Filed Under: Uncategorized Tagged With: Art, Graffiti, Machine Learning, Street Art, Towards Data Science

On the post-centralization of social media

October 24, 2018 by rememberlenny


Below are some running thoughts on the centralization of social media.

I have been reactivating many of my public art crawlers for publicart.io. In the process, I am only now realizing how drastic some of the post-Cambridge Analytica API changes from Facebook and Instagram are for the internet.

While the points below are easily arguable in light of privacy and security consciousness, I believe the optimistic social media policy is lost. Or, in another light, wide access to existing social media content is now reserved for commercial behavior that aligns with larger digital platform agendas. More on this later.

The changes on the respective digital social platforms are positive for privacy- and security-minded individuals. When considering the amount of unwise, and often unconscious, public data that many internet service users produce, this change puts new responsibility in the hands of the service providers.

Instagram #streetart hashtag results

To name a few of the known harms this change mitigates: unwittingly public Venmo purchases; doxxing and personal-security threats fueled by public social media presences; highly scalable identity theft, such as images scraped for fake social media profiles; and, in the most extreme cases, the suppression of human rights activists tracked through their social media posts.

I'm sure a similar list could be made of the exploitative practices the old openness enabled.

There is something magical that is only possible when the cost of media production is low and production is distributed across a large body of creators. That is now lost.

The changes in these APIs create an incentive for new communities to regain control of their fates. While some larger groups built on the existing social platforms (Facebook Groups, etc.) will not move, new groups that can choose where to build will consider their future portability. Perhaps this will result in a series of tools that help smaller communities bootstrap and develop competitive advantages over the existing mediums.

Now that these changes are taking place, I have a few thoughts on advantages that new app developers should consider investing effort in:

1. User education
2. Privacy first development

The changes by big online social media companies, such as Facebook, LinkedIn, and others (RIP Google+), are forcing developers to build tooling in areas that previously seemed unimportant, such as login and bootstrapping social graphs.

Example API change notice

The API changes have become ammunition in developer communities: one more reason app developers should never build on top of an external entity. In some cases, the overnight changes broke digital businesses that depended on readily available API endpoints for searching and accessing publicly displayed social media content.

Advertisement found in European airline magazine

For new app developers, choices around educating users on how their data is used can be a huge advantage. While larger social entities optimize to serve more advertisements or to capture your attention at some future time, an advantage can be assuring your users they don't have to worry. By educating them about what data is stored, how, and why, you remind them of what you are doing even when other applications never prompt them. This itself becomes an implicit point of contrast with the silent bulldozing happening elsewhere.

Privacy-first development for new applications falls in line with the first point, but distinguishes itself by never requiring drastic changes in the future: from the beginning you give your users confidence, as opposed to a future privacy about-face at their expense. Rather than leaning on growth or marketing techniques that exploit unwitting users, services could develop trust with end users and, in turn, keep them coming back.

Reflecting on these changes, it definitely feels that we are entering a new generation of internet applications. Just as there was a race to maximize time spent in applications and user counts, perhaps there will be a swing in the opposite direction, where services compete on building wholesome trust with the individuals who take the time to use their service over others.

Filed Under: Uncategorized Tagged With: Developer, Engineering, Ethics, Machine Learning, Privacy
