I decided to develop familiarity with computer vision and machine learning techniques. As a web developer, I found this growing sphere exciting, but did not have any contextual experience working with these technologies. I am embarking on a two-year journey to explore this field. If you haven't read it already, you can see Part 1 here: From webdev to computer vision and geo.
I ended up getting myself moving by exploring any opportunity I had to excite myself with learning. I wasn't initially set on studying machine learning, but I wanted to get back in the groove of being excited about a subject. I kicked off my search by attending a day-long academic conference on cryptocurrencies, and by the time the afternoon sessions began, I realized machine learning and computer vision were much more interesting to me.
Getting started
I kick-started my explorations right around the time a great book on the intersection of deep learning and computer vision was published. The author, Adrian Rosebrock of PyImageSearch.com, compiled a three-volume masterpiece on the high-level ideas and low-level applications of computer vision and deep learning. While exploring deep learning, I encountered numerous explanations of linear regression, naive Bayes applications (I realize now that I have heard this name pronounced so many different ways), random forest/decision tree learning, and all the other things I'm butchering.
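For my own reference, here is a minimal scikit-learn sketch (mine, not the book's) that runs the algorithms mentioned above against a toy digits dataset, with logistic regression standing in as the classification cousin of linear regression:

```python
# Comparing the classic algorithms mentioned above on a small built-in dataset.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for model in (LogisticRegression(max_iter=5000),  # linear model for classification
              GaussianNB(),                       # naive Bayes
              RandomForestClassifier(n_estimators=100)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```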
My new book, #DeepLearning for Computer Vision with #Python, has been OFFICIALLY released! Grab your copy here: https://t.co/rQgpAflp52
— PyImageSearch (@PyImageSearch) October 17, 2017
I spent a few weeks reading the book and came away feeling like I could connect all the disparate blog posts I had read up to that point to the array of mathematical concepts, abstract ideas, and practical programming applications. I read through the book quickly and came away with a better sense of how to approach the field as a whole. My biggest takeaway was the conclusion that I wanted to solidify my own tools and hardware for building computer vision software.
Hardware implementation
I was inspired to get a Raspberry Pi and an RPI camera that I would be able to use to analyze streams of video. Little did I know that setting up the Raspberry Pi would take a painfully long time. Initially, I expected to simply get up and running with a video stream and process the video on my computer. Instead, I struggled to get the Raspberry Pi operating system to work. Then, once I realized what was wrong, I accidentally installed the wrong image drivers and unexpectedly installed conflicting software. The process that I initially thought would be filled with processing camera images ended up becoming a multi-hour debugging nightmare.
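For context, the starting point I had pictured was something like this minimal OpenCV sketch, which grabs frames from a camera and does a trivial bit of processing. On a Raspberry Pi it assumes the camera module shows up as a standard video device (for example via the bcm2835-v4l2 driver), which was exactly the part that fought back:

```python
# Grab frames from the first attached camera and show a grayscale version.
import cv2

capture = cv2.VideoCapture(0)  # device index 0: Pi camera via V4L2, or a webcam
try:
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # placeholder processing step
        cv2.imshow("stream", gray)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
finally:
    capture.release()
    cv2.destroyAllWindows()
```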
So far, I have realized that a huge part of getting started with machine learning and computer vision "stuff" is debugging. The loop looks something like this (a quick sanity-check sketch follows the steps):
Step 1. Get an idea.
Step 2. Start looking for the tools to do the thing.
Step 3. Install the software needed.
Step 4. Drown in conflicts and unexpected package version issues.
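Step 4 is why I now start every project with a tiny sanity check. A minimal sketch, with the package list being my own guess at a typical computer vision setup:

```python
# Print the versions of the packages a project expects, so version conflicts
# surface immediately instead of halfway through an experiment.
import importlib

EXPECTED = ("numpy", "scipy", "cv2", "PIL")  # example names; adjust per project

for name in EXPECTED:
    try:
        module = importlib.import_module(name)
        print(f"{name}: {getattr(module, '__version__', 'unknown')}")
    except ImportError:
        print(f"{name}: MISSING")
```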
My original inspiration for the Raspberry Pi was the idea of setting up a simple device with a camera and a GPS signal. The idea came from thinking about how many vehicles in the future, whether autonomous or fleet vehicles, will need many cameras for navigation. Whether for insurance purposes or basic functionality, I imagine that a ton of video footage will be created and used. In the process, there will be huge repositories of media that go unused and become a rich data source for understanding the world.
I ended up exploring the Raspberry Pi's computer vision abilities, but I never got anything as interesting working as I had hoped. I discovered that there are numerous cheaper Raspberry Pi-like devices that pack both the connectivity and the camera functionality onto a smaller board than a full-size Raspberry Pi. Then I realized that rather than going the hardware route, I might as well have used an old iPhone and developed some software.
My brief attempt at exploring a hardware component of deep learning made me realize I should stick to software where possible. Including a new variable when the software part isn't solved just adds to the complexity.
Open source tools
In my first month of looking around for machine learning resources, I found many open source tools that make getting up and running very easy. I knew that there were many proprietary services provided by the FANG tech companies, but I wasn't sure how they compared with the open source alternatives. The image recognition and OCR tools offered as SaaS by IBM, Google, Amazon, and Microsoft are very easy to use. To my surprise, there are great open source alternatives that are worth configuring to avoid unnecessary dependence on those services.
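As one concrete example of the open source route, Tesseract (through the pytesseract wrapper) covers the OCR use case without per-call charges. A minimal sketch, assuming the Tesseract binary is installed and with "receipt.png" as a placeholder path:

```python
# Open-source OCR instead of a paid cloud API: Tesseract via pytesseract.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("receipt.png"))
print(text)
```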
For example, a few years ago, I launched an iOS application to collect and share graffiti photos. I indexed images from publicly available APIs that serve geotagged images, such as Instagram and Flickr. From these sources, I used basic features, such as hashtags and location data, to determine whether images were actually graffiti. Initially, I was pulling thousands of photos a week, and I soon scaled to hundreds of thousands a month. I quickly noticed that many of the images I indexed were not graffiti and instead were images that would be destructive to the community I was trying to foster. I couldn't prevent low-quality selfies or poorly tagged images that were not safe for work from loading in people's feeds. As a result, I decided to shut the project down.
Now, with the machine learning services and open source implementations for object detection and nudity detection, I can roll my own service that easily checks each photo that gets indexed. Previously, if I had paid a service to do that quality checking, I would have racked up hundreds if not thousands of dollars in API charges. Instead, I can now spin up an AWS box from a "data science" AMI and create my own API for checking for undesired image content. This was out of reach for me even just two years ago.
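The service I have in mind is roughly the shape of the sketch below. Flask and Pillow are real libraries; the check_image function is a stand-in of mine for whatever open source moderation model ends up being plugged in:

```python
# Rough shape of a self-hosted image checking API.
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

def check_image(image):
    """Placeholder: return a fake score until a real moderation model is wired in."""
    return {"unsafe_score": 0.0}

@app.route("/check", methods=["POST"])
def check():
    # Expects a multipart form upload under the "photo" field.
    image = Image.open(request.files["photo"].stream)
    return jsonify(check_image(image))

if __name__ == "__main__":
    app.run(port=5000)
```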
Overview
On a high level, before undergoing this process, I felt like I theoretically understood most of the object recognition and machine learning processes. After beginning to connect the dots between all the machine learning content I had been consuming, I am much clearer on which concepts I need to learn. For example, rather than just knowing that linear algebra is important for machine learning, I now understand how problems are broken into multidimensional arrays (matrices) and processed in bulk to look for patterns. Before, I knew that there was some abstraction between features and how they were represented as numbers that could be compared across a range of evaluated items. Now I understand more clearly that dimensions, in the context of machine learning, simply reflect the many factors that are directly and indirectly correlated with one another. The matrix math behind the multidimensional aspects of feature detection and evaluation is still a mystery to me, but I can follow the high-level concepts.
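A toy version of that idea, as I currently understand it: every image becomes a row of pixel features, the dataset becomes one big matrix, and comparison collapses into matrix math. The shapes and data below are made up purely for illustration:

```python
# Images as rows of a feature matrix; similarity search as vectorized math.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))   # 100 fake 32x32 grayscale images
features = images.reshape(100, -1)   # flatten each image: (100, 1024) matrix

query = features[0]
# Cosine similarity of the query against every row, in one vectorized step.
scores = features @ query / (np.linalg.norm(features, axis=1) * np.linalg.norm(query))
print(scores.argmax(), scores.max())  # index 0, similarity 1.0 (the query itself)
```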
Concretely, reading Adrian Rosebrock's book gave me the insight to decode the box-line diagrams of machine learning algorithms. The breakdown of a deep learning network architecture is now somewhat understandable. I am also familiar with the datasets (MNIST, CIFAR-10, and ImageNet) that are commonly used to benchmark various image recognition models, as well as the differences between image recognition models (such as VGG-16, Inception, etc.).
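Those benchmark models are also only a few lines away in practice. Here is a sketch that runs VGG-16, pretrained on ImageNet, through the Keras applications API; "photo.jpg" is a placeholder path, and the weights download automatically on first run:

```python
# Classify an image with VGG-16 pretrained on ImageNet.
import numpy as np
from tensorflow.keras.applications.vgg16 import (VGG16, decode_predictions,
                                                 preprocess_input)
from tensorflow.keras.preprocessing import image

model = VGG16(weights="imagenet")
img = image.load_img("photo.jpg", target_size=(224, 224))  # VGG-16 input size
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # top-3 (class, label, probability)
```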
Timing: Public Funding
One reason I decided machine learning and computer vision are important to learn now relates to a concept I took from the book: areas with heavy government investment in research are on track for huge innovation. Currently, hundreds of millions of dollars are being spent on research programs in the form of grants and scholarships, in addition to funding allocated to specific machine learning projects.
In addition to government spending, publicly accessible research from private institutions seems to be growing. The research coming out of big tech companies and public foundations is pushing the entire field of machine learning forward. I have personally never seen such a concentration of public projects funded by private institutions, in the form of publications like distill.pub and collectives like OpenAI. The work they are putting out is unmatched.
Actionable tasks
Reviewing the materials I have been reading, I realize my memory is already failing me. I'm going to do more action-oriented reading from this point forward. I have a box with GPUs to work with now, so I don't feel any limitations around training models and working on datasets.
Most recently, I attended a great conference on Spatial Data Science, hosted by Carto. There, I became very aware of how much I don't know in the field of spatial data science. Before the conference, I was just calling the entire field "map location data stuff".
I'll continue making efforts to meet up with different people I find online with similar interests. I've already been able to do this with folks who live in New York and have written Medium posts relevant to my current search. Most recently, while exploring how to build a GPU box, I was able to meet a fellow machine learning explorer for breakfast.
By the middle of January, I'd like to be familiar with the technical frameworks for training a model on graffiti images. At the very least, I want to have a set of images to work with, labels to associate with the images, and a process for cross-checking an unindexed image against the trained labels.
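As a sketch of that end state, here is the shape I have in mind, with the directory layout and the simple nearest-neighbor baseline being entirely my own placeholders:

```python
# Images on disk, labels from the directory name, a trained model, and a way
# to check an unindexed image against the labels. The data/graffiti and
# data/not_graffiti layout is an assumption, not an existing dataset.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier

def load_features(path):
    """Flatten a grayscale thumbnail into a normalized feature vector."""
    img = Image.open(path).convert("L").resize((64, 64))
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

X, y = [], []
for label in ("graffiti", "not_graffiti"):
    for path in Path("data", label).glob("*.jpg"):
        X.append(load_features(path))
        y.append(label)

model = KNeighborsClassifier(n_neighbors=5).fit(np.array(X), y)
print(model.predict([load_features("unindexed.jpg")]))  # cross-check a new image
```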
Thanks to Jihii Jolly for correcting my grammar.