Bi-weekly update for February 19th, 2019
Hey! I promised a bi-weekly update, so here's #2!
I did a lot of new development these past two weeks. To kick it off, I started experimenting with social media ads and selling physical products, built and released 21 versions of an app, majorly upgraded my backend application, and finally got the Python scrape/import process working.
Here are the details:
Two weeks ago, I mentioned the progress on the machine learning tasks I was running, and I got a message in the Pioneer August cohort.
In short, I was reminded that I can build a sustainable business around the art collected in this project, and was encouraged to consider what that would look like. I had previously written off the notion of selling anything, as I am more interested in the preservation of street art, but the suggestion alone got my mind racing.
I set up a landing page for selling street art posters, along with a variety of social media ads. I targeted people interested in street art and graffiti-related hashtags, and set a small but reasonable budget across the audiences. I noticed that a basic advertisement selling a poster for $26 got a decent response. I also ran a very small (and unreasonable) experiment around "free" posters, to get a sense of how the product itself was being received, independent of the cost. Overall, this led to the next step.
I explored sourcing poster prints and found the margins of a totally hands-off poster printing business to actually be very reasonable. Even accounting for the cost of driving traffic with ads, there is potential for building something that generates income that could be funneled back to artists or photographers. I ordered a print from one poster company and was pleasantly surprised by the paper and print quality for the cost and photo resolution.
Shifting away from the new idea, I spent a lot of effort building out the actual street art tools. The last email was about the machine learning work of training a model to detect street art. This week, I focused more on building the tool to support user-generated content and a pleasant medium for consuming the images.
I decided to fully rebuild my original iOS app that was launched in 2014. Since launching, I hadn't touched it, and it began collecting proverbial dust.
I had three parts that needed to be revitalized.
First, I needed a new app. Second, I needed fresh content to serve. And third, I needed a way to manage the content uploaded by users.
Regarding the app, I have been thinking about the execution of a good street art application for a while, so I knew what I wanted to do. Rather than focusing on something that needs manual managing and updating, I knew the only way I could be effective at building something was to make a self-updating, self-engaging app that uses feeds of data to refresh itself for users. I also realized that the effective street art browsing pattern is not a regular cadence of opening the app, but rather a semi-regular summary email/notification that draws in an interested user.
I decided to use React Native and built out a four-part app. The first part provides an editorially curated list of images from a larger community; each day this creates a fresh set of images to view. The second part is a search-based tool that lets people search for images in a specific place. Specific places are most interesting when they are your current location, but there are so many images being uploaded daily that the third part of the app is a tool to view trending cities. Finally, the fourth part allows users to upload content on their own and tag/label images.
Demo: https://youtu.be/wRWcbB3HfDY
Based on this model, I was able to get an authentication system up and running that allows users to sign up with a digital identity. This was built around a previous application I had, so I have a way to customize a user's experience based on their browsing history and potentially create tools around user behavior. This system also lets me associate user-generated content with an account, which is important for a variety of reasons.
For the daily update content, I took a shortcut and decided to feed images from Reddit's streetart subreddit. That community uploads images at a steady cadence, so for now, it is my source of editorially curated content.
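For the curious, here's a minimal sketch of what pulling that feed can look like, using Reddit's public JSON listing endpoint. The image-filtering rule is my own simplification; a real version would also handle galleries, preview links, and rate limits.

```python
# Sketch: pull image posts from r/streetart via Reddit's public JSON feed.
import requests

def fetch_daily_streetart(limit=25):
    resp = requests.get(
        "https://www.reddit.com/r/streetart/top.json",
        params={"t": "day", "limit": limit},
        headers={"User-Agent": "streetart-feed-demo/0.1"},  # Reddit wants a real UA
        timeout=10,
    )
    resp.raise_for_status()
    posts = resp.json()["data"]["children"]
    # Keep only direct image links (simple suffix check; an assumption,
    # not a complete filter).
    return [
        {"title": p["data"]["title"], "url": p["data"]["url"]}
        for p in posts
        if p["data"]["url"].lower().endswith((".jpg", ".jpeg", ".png"))
    ]

for post in fetch_daily_streetart():
    print(post["title"], "->", post["url"])
```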
For the location search and trending locations, I was able to use the old API for street art from my 2014 app. The server that calculates your current location and the nearest images to that point is still functional. The only problem is that all the old images are no longer accessible due to Instagram's past platform changes. As a result, I needed to rebuild the dataset behind this server.
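To give a sense of what that server does, here's a rough sketch of a nearest-image lookup. I'm assuming a simple in-memory list of (lat, lon, image_id) records; the real server presumably uses a proper geo index, but the haversine formula is the standard great-circle distance calculation.

```python
# Sketch of a nearest-image lookup over (lat, lon, image_id) records.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearest_images(lat, lon, records, n=20):
    """Return the n records closest to (lat, lon)."""
    return sorted(records, key=lambda r: haversine_km(lat, lon, r[0], r[1]))[:n]

# Hypothetical records for illustration.
records = [(40.7128, -74.0060, "nyc-1"), (51.5074, -0.1278, "london-1")]
print(nearest_images(40.75, -74.0, records, n=1))
```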
To do that, I have been scraping images for the past couple of months, but hadn't been able to process them to refresh the local art detection service. To get the images ready, I needed to make a small program that checks whether the scraped images have associated location data, then uploads them to my application and creates a location data point correlated to each image. This was something I kept putting off, but I finally took the time to do it.
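Here's roughly what that check-and-upload step looks like, assuming the location data lives in the images' EXIF GPS tags (read via Pillow). The upload endpoint and payload shape are hypothetical stand-ins for my application's API.

```python
# Sketch: skip images without GPS EXIF, upload the rest with a location point.
from pathlib import Path

import requests
from PIL import Image
from PIL.ExifTags import GPSTAGS, TAGS

def gps_from_exif(path):
    """Return (lat, lon) from EXIF GPS tags, or None if absent."""
    exif = Image.open(path)._getexif() or {}
    gps_raw = next((v for k, v in exif.items() if TAGS.get(k) == "GPSInfo"), None)
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

    def to_decimal(dms, ref):
        deg, minutes, seconds = (float(x) for x in dms)
        value = deg + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    try:
        return (
            to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"]),
        )
    except KeyError:
        return None

def import_directory(directory):
    for path in Path(directory).glob("*.jpg"):
        coords = gps_from_exif(path)
        if coords is None:
            continue  # no location data: skip for now
        with open(path, "rb") as f:
            # Hypothetical endpoint: upload the image plus its location point.
            requests.post(
                "https://example.com/api/images",
                files={"image": f},
                data={"lat": coords[0], "lon": coords[1]},
                timeout=30,
            )
```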
I ended up writing the image uploader and location metadata association script in a Python notebook. Given how iterative the process was to get right, the notebook turned out to be surprisingly useful, which I did not expect.
I got the first batch of 10,000 images working and have many hundreds of thousands more to process. Fortunately, for the most recent batch I scraped the images along with their location data; the images were slower to download, but that left only one remaining step once I was done.
For the remaining images, I need to add a step of checking if the downloaded images have corresponding location data. This shouldn't take too long.
I have a few more possible tasks to figure out. One is that my image scraper saves images to a file system. The ideal situation would be a program that scrapes images directly, does all the other work needed to get the location data, and imports the images into my application. Because there is so much rate limiting around the scraper, this is harder than it sounds. As a result, I need to make some kind of daemon that monitors my filesystem for new files and manages the scraped images. This daemon would ideally track which files were already checked/uploaded, so I could let the scraper keep operating as it is.
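A sketch of what that daemon could look like, using the watchdog library (pip install watchdog) for filesystem events and a plain text ledger of already-processed paths. handle_new_image is a placeholder for the check/upload logic above.

```python
# Sketch: watch a scrape directory and process each new image exactly once.
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

LEDGER = Path("processed.txt")  # assumed ledger file of handled paths

def already_processed():
    return set(LEDGER.read_text().splitlines()) if LEDGER.exists() else set()

def handle_new_image(path):
    print("would check location data and upload:", path)  # placeholder

class ScrapeWatcher(FileSystemEventHandler):
    def __init__(self):
        super().__init__()
        self.seen = already_processed()

    def on_created(self, event):
        if event.is_directory or not event.src_path.endswith(".jpg"):
            return
        if event.src_path in self.seen:
            return
        handle_new_image(event.src_path)
        self.seen.add(event.src_path)
        with LEDGER.open("a") as f:
            f.write(event.src_path + "\n")

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(ScrapeWatcher(), "scraped_images", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```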
Separately, I noticed that a lot of the newer images I have been getting have less accurate location metadata. I think this is part of the privacy/security shift on the Instagram platform. Although it's not explicit, I imagine that the Instagram UI defaults to auto-populating locations that are less specific when people upload images. As a result, I will likely need to account for ways to properly associate images with their proper locations.
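One hypothetical heuristic for catching those vague locations: coordinates with very few decimal places tend to point at a city centroid rather than a specific wall. The threshold below is a guess rather than a tuned value.

```python
# Sketch: flag coordinates that look like a city centroid rather than a spot.
def looks_vague(lat, lon, min_decimals=3):
    """True if either coordinate has fewer than min_decimals decimal places."""
    def decimals(value):
        text = f"{value}"
        return len(text.split(".")[1]) if "." in text else 0
    return decimals(lat) < min_decimals or decimals(lon) < min_decimals

print(looks_vague(40.71, -74.0))      # True: roughly "New York City"
print(looks_vague(40.7128, -74.006))  # False: precise enough to be a spot
```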
Lots of stuff happening and more to come!