
Remember Lenny

Writing online


Archives for 2014

I returned my Google Glass after 30 days

May 23, 2014 by rememberlenny

Taken with Glass


At the end of November, I was one of the thousands of people who received the next round of invitations for Google Glass. After the v2 Explorer Edition was announced, I responded to an email from the Google Developer Group meetup in New York City. The email offered a Google Glass invitation code to anyone interested in acquiring the device.

Initially, I was completely sure I would want the device. I immediately responded to the email and excitedly told my roommate. Based on the timing of the invitation code request, I had to wait through the Thanksgiving holiday before hearing a reply. During that time, I went home and spent an extended period researching the state of Google Glass.

Many of the blogs online weren't very diverse in opinion. There were two camps: "It's not where it could be, but I love the potential" and "This thing is way too expensive for what it can do". Reading about the state of Google Glass, I found that the battery life was poor, there was no integration with iOS (this has since changed), the app ecosystem was not extensive, and you couldn't use it with prescription glasses. While I oscillated on whether to buy Glass, I decided to go ahead with it.

Upon receiving confirmation of an invitation code, I set up my pickup date. Surprisingly, after I told my roommate, he applied for a Glass invite and was able to get a pair. After seeing his pair, I was reinvigorated to experiment with the new technology. In fact, because he had a pair, I started to imagine a number of possibilities involving multiple Glass users.

Glass in NYC

Wearing Glass in New York isn't too strange. My roommate says it makes you a "C-class celebrity". People look at you on the subway. Glass becomes an easy conversation starter. I even felt like I got special treatment at restaurants. Regardless, it also becomes an impediment to feeling comfortable around people. It's an expensive piece of technology that draws attention.

I found that initially I was very interested in the different applications offered on Glass. The most interesting application is called Field Trip. Field Trip uses your current location to feed you relevant information about places around you. For example, it pulls in historic information about the landmarks nearby. When you pass the area, you get a "card" that tells you the area's significance. There was a building in my neighborhood that I really admired, but I never knew its historic significance. After walking by the building while wearing Glass, I got a "card" about it. This blew my mind.

For most of December, Glass didn't have an iOS app. As a result, functionality was very limited. There were no maps, very little SMS integration, and the device quickly drained my phone's battery. Even with these limitations, wearing Glass was awesome.

In mid-December, Glass updated itself overnight and introduced new features. This was impressive. The coolest was a "Wink" recognition feature that was not previously accessible. It allowed you to take pictures by winking your right eye. Although the wink detection would occasionally lose calibration, the feature itself was remarkable.

Around the same time, the iOS application was released. This gave Glass the ability to integrate with the iPhone. With the iOS app, Glass users could get directions without an Android device. This was very useful, but still not as impressive as I'd hoped. Even in the cold December weather, I found it easier to pull out my iPhone and look at Google Maps. When I needed to go to obscure addresses, it became impossible to "speak" the appropriate address.

My biggest struggle was the limited user interface on Glass. While the device is best suited for voice activation, you can also use a number of gestures. I found the current interface too one-dimensional. When trying to set a timer or proceed through a list of application options, finding and selecting the preferred choice was difficult. I'm sure this will be improved and no longer be an issue in the future.

Social implications of wearing Glass

The reason I'm returning Glass has less to do with the technology and more to do with the social implications. I had some great use cases, such as easily recording personal interviews, as well as some ugly accusations from strangers. Overall, the response from people was more positive than anything. People wanted to know about the "cool looking glasses". Most laypeople don't follow tech news, so they have no idea what Glass is. I found that young men responded to Glass most positively. Conversely, the largest group of negative responses came from middle-aged women.

My biggest issue with Glass was its disruptive quality when looking at or talking with people. Glass doesn't disrupt your field of view, but it does feel like a barrier when interacting with other people. Once I was talking to someone, the Glass could easily be ignored, but I always felt a sense of discomfort.

Letting other people wear Glass

My greatest joy with Glass was letting other people wear it. I found that younger people were very adept at using the voice-activated commands. People in their twenties, by contrast, would wear the Glass and passively wait for something to happen. Adults who I had wear the Glass were often impressed by the device, but not as interested in trying it on.

After the honeymoon phase, I found myself using the Glass as a glorified watch. It became very easy to check the time. I can imagine great applications providing very valuable snippets of information with ease. For the time being, though, the information being served doesn't personally convince me of the need to have Glass, and even the best Glass apps are accessible via mobile devices. Glass will shine the moment apps are exclusively available for it.

Reason for returning

I am returning Glass because I feel guilty about how much it costs. Buying the Glass was not a financial burden. As a software developer, I can justify expensive technology purchases if they benefit my quality of life. Even if I don't use a device frequently, I can justify the value if it is useful when needed. I found this to be the case with buying a nice monitor for use at home and a high-quality, lightweight laptop.

Still, the notion that I paid $1500 for the Glass felt absurd. Thinking of the many people who work low-paying hourly jobs, my mother included, I felt it was ridiculous to have such an expensive luxury item. The Glass would cost more than a month's worth of paychecks for most people. Knowing that I was wearing the thing around without getting much utility from it kept reminding me how I was wasting the financial capital invested in the piece of technology.

If the device cost a third as much, I could begin to justify it. For the time being, I am embarrassed to be paying two months' rent to feel a part of an exclusive group. While I think it's amazing, I don't feel comfortable participating.

Overall thoughts

Considering my grievances about the user experience, I know these will be worked out. I have no question that the teams working on Glass (officially and unofficially) are building amazing software. The potential of a freely accessible camera and screen is brilliant. I can imagine security guards with Glass networks. Having a network of other "eyes" that you could access with a voice command seems useful in a number of professional use cases. I can see great social implications where people see digital geo-fence-activated messages based on their social networks. Glass-like technologies have only just begun. I will be waiting with excitement to see the continued maturation of the Glass platform and its users.


Hackhands experience

May 14, 2014 by rememberlenny

Pair programming is appealing to programmers because they often work alone. The opportunity to work alongside another developer and learn or share is a selling point. Services that make pairing possible over screenshare make a lot of sense.

I started working on Hackhands. Hackhands is a service that connects programmers with people who have programming issues. The service bills the user who needs help and pays the 'teacher' a dollar a minute.

My first experience was positive. Hackhands sent me an email telling me someone needed help with a JavaScript issue. I logged in and immediately gained visibility into the user's problem. We used video chat with screen share. I directed the person on how to resolve the issue, while using the Hackhands interface to send code snippets. The session lasted 14 minutes and left the user pleased.

I tried the service again yesterday and had a different experience. This time the person's problem was much larger. He wanted to add a parallax layout to an existing webpage, but the page had issues that prevented a parallax layout from being applied. The problem's scope was beyond a short meeting.

The problems with this user raised a flag. These services are effective in their domain, but fail completely when extended beyond it. The second user I connected with left displeased because his expectations were not met. He had no idea how the JavaScript and CSS on the page worked; he should have hired someone to do the job.

These services are useful for users who are on the right track. Hackhands is perfect when users know something should work, but don’t know how to execute a specific part of their problem.

If you know your stuff and want to help people, check out https://hackhands.com/#/


Present on topics you know nothing about

April 20, 2014 by rememberlenny

Talking

I spoke at the HTML5 App Developers meetup in New York on April 16th. John Paul approached me two months ago and asked if I would present. Until then, I hadn't considered speaking at a meet-up because I didn't have anything to share. When John Paul asked me, I said yes because I knew it would be a good opportunity. I had never spoken on a technical topic, but I had been to enough meet-ups to know what to prepare.

Upon submitting my talk topic a month ago, I decided on Computer Vision (CV) and Machine Learning (ML) in JavaScript. I never studied advanced math or computer science in school, so I needed to research the topic to prepare a talk.

Before picking my talk title, I did my share of Googling around for potential leads. I made sure JavaScript and ML were established topics. I had also read about Computer Vision applied to filtering in e-commerce stores. I made sure people had written on the topic and looked for codebases I could learn from.

I collected all the resources I found in an Evernote note. I used Evernote for the ease of sharing between devices: I have a work computer, personal laptop, tablet, and cell phone. Whenever I had time, I looked for links and added them to my document. With every link, I added a short description of the content. I would reorganize the document by priority of usability and value.

Once I had enough resources to pick from, I began building a high-level picture. Until then, I had only Googled and collected links on first impression. After looking further into the topic, I realized I had made a huge mistake. The resources I had found covered ML and CV with JavaScript, but not in the browser. They were Node applications, no different from the C++/Python/MATLAB/Java alternatives. This was a problem.

Talking to people and GitHub search saved me in the end. I looked on Hacker News and sought out email newsletters on ML. I used GitHub repository search, filtered out backend languages, and found JavaScript-only projects. By reaching out to people through their blogs and Twitter, I was able to get pointers to resources worth looking into. Talking with people sped up my ability to identify the "best" resources available.
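For instance, GitHub's search API supports a language qualifier, which is how you can filter to JavaScript-only projects. A minimal sketch (the query terms here are my own illustration, not the exact searches I ran):

import requests

# Query GitHub's repository search for JavaScript machine learning
# projects; the query string is an illustrative example.
response = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "machine learning language:javascript", "sort": "stars"},
)
for repo in response.json()["items"][:10]:
    print repo["full_name"], repo["html_url"]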

After securing resources, I focused on gaining a lower-level understanding of how they functioned. I was able to read and understand the JavaScript, but I didn't understand the principles. I found an online course from Stanford's CS program; the resources I had collected mentioned the class. The class covered the full spectrum of ML mechanics and familiarized me with the vocabulary.

Reviewing the class, I realized it was impossible to fully grasp the topic before the presentation. There was too large a gap between what I understood and what I needed to know. I didn't understand linear algebra.

I spent time understanding the primary concepts, but didn't try to master them. I looked at matrix math, linear regression, and the referenced algorithmic methods. I didn't understand the depth of the algorithms, but I understood what they did and didn't have the capacity to do. Through this, I was able to synthesize the concepts using analogies. Instead of presenting the terse concepts in full, I could present the analogies.
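To give a flavor of those mechanics, here is a toy sketch (added for illustration; it is not material from the talk) of single-variable linear regression by gradient descent, the kind of algorithm the course covers:

# Fit y = m*x + b to toy data by gradient descent on mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]  # roughly y = 2x

m, b = 0.0, 0.0  # slope and intercept
alpha = 0.01     # learning rate

for step in range(5000):
    # Partial derivatives of the mean squared error with respect to m and b
    grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    m -= alpha * grad_m
    b -= alpha * grad_b

print m, b  # converges near m = 2, b = 0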

Finally, I decided how I would present. I synthesized the materials I found and separated them into a presentable form. I started with the broad concepts, then moved to explaining low-level behavior, and finally planned to present examples. I ran the presentation structure by John Paul. After hearing his thoughts, I restructured the presentation to focus more on code and less on concepts.

Meet-up talks are usually about a technology or library the presenter made. In my case, I didn't have anything to showcase, but I felt the libraries and topics were obscure enough that people would get value from the exposure. After restructuring the material, I planned to make a photo-rich presentation.

I first wrote the presentation out, one line per slide, in a Word document. Once I had the majority of the presentation, I started looking for images to go with each line. I had an idea of what I wanted to show for each line, so this process was a matter of collecting resources. After I collected all the images and made the respective GIFs, I put it all into a PowerPoint presentation. I printed out the text to read, so the computer would only present slides.

The preparation helped me during the actual presentation. Although I was nervous, I knew exactly what to say. I set a timer for myself when getting on stage, so I knew if I was behind schedule. By the halfway point in my presentation, I had stopped reading from the prepared text. I was in the zone and didn't need a guide. The talk's delivery was key, so even though I didn't end up using the prepared text, it comforted me to have it.

I didn't know anything about ML or CV. I couldn't have been happier about accepting the talk. Through the preparation, I became more familiar with ML and CV than I could ever have expected to be. The research also gave me some good ideas for future projects and talks. I realized through researching that there were many applications of ML and CV that I expected to exist, but couldn't find. The topics' complexity is a barrier to applying the concepts to everyday problems people would enjoy.

Link to the talk


Discovery from Napster to iTunes

April 2, 2014 by rememberlenny

I was born into the generation of dying video rental stores. Blockbuster's business peaked in 2004, then declined. The rise of Redbox, Netflix, and on-demand alternatives crushed the rental business. In middle school, I spent afternoons roaming the aisles of my local video stores. I could stop in one section and see all the tapes in a single series. I spent many nights binge-watching all the original Star Trek, Indiana Jones, and Star Wars films. At the video store, I would find a section I liked and grab as many videos as I could fit in my arms. I used genre to discover videos.

I was also born into a generation with frictionless access to music. My siblings' generation was a part of the Walkman revolution. I was a part of the music industry's transition to CDs. The rise of Virgin Music. The shift of vinyl from a music-listening medium to a vintage item. This was the Napster generation. The rise of Kazaa. And the growth of iTunes. My ability to access music changed over a few years.

In elementary school, I couldn't answer questions about music. My classmates had favorite bands. They would talk about the concerts they went to with their parents. I couldn't relate. My favorite CDs were movie soundtracks. I listened to the Matrix soundtrack on repeat for months. Not because I thought it was the best music, but because it was all I knew about.

When P2P networks began to rise, I started discovering music through individual song downloads. I would search for bands I had heard of and download any of the returned results. In middle school, I discovered house music and started downloading music by genre. I would search en masse for keywords associated with my taste. After downloading songs, I would burn them to a CD and listen to it on the way to school. I rarely shared CDs. I felt like the music I listened to was unique to my tastes.

While driving with my friends' parents, we would listen to the radio, usually the oldies channel. I rarely heard anyone else listening to the house or techno music I liked.

Torrenting changed the way I consumed music. Instead of downloading individual songs, I began downloading bands' discographies. I wouldn't download the individual songs I liked; instead I would have every song ever released by an artist. I remember going through intense Rage Against the Machine, Wu-Tang Clan, Nirvana, and MF DOOM phases. This was because I was able to get the discography and listen to every one of their songs.

With a full discography, I found it difficult to find the quality songs by a band. P2P services acted as an indirect rating service through the number of uploads and downloads happening. Torrenting services acted the same way, but the files downloaded were much larger. Through a P2P service, the trafficked files signaled quality. Discographies didn't show me the best songs or albums by an artist. As a result, I started having more music than I would ever listen to. I even had bands that I didn't like, because I had never heard their 'good' songs.

My problem wasn't access; it was identifying quality. P2P networks worked because people downloaded one song at a time. If there was high traffic on a song, it was good. With torrents, the good songs were no longer clear. I didn't care enough to research a band after downloading their discography. I was more interested in the feeling of having all the songs.

iTunes was a game changer. It wasn't about making music accessible; I had all the access I needed. It wasn't about instant access either. As a high schooler, I had all the time in the world to find a way to download an album or song. I would scour IRC boards, download forums, or torrents. iTunes was a game changer for me because it reintroduced the ability to distinguish an artist's quality songs. That was what had been missing.

I had a discovery problem. In video stores, I discovered content by genre. I would go to the "editor's picks" section when I needed inspiration. For music, I started out with movie soundtracks. Napster and Kazaa made music accessible to me. Before, music felt like a foreign world. After, I could learn from the presence of an existing community of consumers. Torrents gave me access to everything I wanted in bulk. My issue was not access; I didn't know how to identify quality. iTunes reintroduced the ability to discover quality.

When I discovered Hype Machine, I was ecstatic. Hypem surfaces the internet's most popular songs by tracking music blogs. It lets you listen to music through an online stream, without advertisements. I would listen to the Hypem popular chart, and if I heard something I liked, I would find the artist in iTunes and buy their songs. I discovered many artists I love through this method.

Discovering new music is great. I love finding a new band that I can share with my friends. I also love having a variety of music playing in the background. When I'm in the mood for something new, I use Hypem. This is the best part of the passive music-listening process for me.

And now, Spotify is my new boat. A band I haven't listened to in a long time will resurface in my memory. When this happens, I turn to Spotify. Spotify is my go-to music access tool. In contrast to Hypem, I use Spotify when I know exactly what I want to listen to. Today it was Spoon. Yesterday it was The Faint. Tomorrow, who knows. Regardless, having one tool to discover new music and another to resurface old music is crucial.


Clear goals direct good development

March 28, 2014 by rememberlenny

Yesterday I discovered David Jackson's blog A Founder's Notebook via Fred Wilson's blog. I skimmed a few posts and found "Setting clear goals = empowerment". Jackson curates a wide array of content from other blogs and adds brief thoughts on each issue. I'm going to pull from his blog.

From A Small World CEO Sabine Heller's Corner Office interview:

You have to manage people based on results and set clear goals. It sounds like a simple thing, but people don’t do that often. When I was 22 and working at UGO, it didn’t matter that I had no experience and it didn’t matter what my process was as long as I hit my goal. It taught me how empowering it is to be treated like that. I am a great manager for people who are strong thinkers and motivated. I empower people. I promote people. I give them a lot of leeway. At the end of the day, I look at results, and that’s it. I feel very strongly that organizations infantilize employees. You should treat them like adults.

In my current position as a software engineer at Condé Nast, I have been a part of two tech teams. I have worked on projects that affect platform-level code. In my experience, I am most productive when the project scope is predetermined by the client. I can commit to a clear timeline only if I understand the expected output. If I am given an unrefined project, there must be a predetermined time limit.

Have clear project definitions before passing a project on to engineers. If the project is not yet built, run the ideas past a designer and start on a visual mock. If the visual mock passes the project developer's taste, then build a prototype. Don't rush into the production build; figure out the issues and assumptions associated with the prototype. Once the product owner vets the prototype, you can build. Someone who understands the potential conflicts of the prototype needs to provide a final review. If all is good, reevaluate the timeline and commit to it.

After the initial design/prototype is complete, an estimate should be accurate. Before design, an estimate can't be accurate because the project scope will change. This is not the engineer's fault. The person who defines the product must take full responsibility; this would be the product manager or the CEO of a company. Before the designer passes ideas on to development, refine them. Clear goals inspire effective work. Defined product expectations also result in productive engineering time.


Scraping web forums for image URLs

March 7, 2014 by rememberlenny

Breakdown of posts by user

Goals

I'm applying my graffiti interests to my programming ability. Today I put together a brief Python script to scrape a reputable graffiti writers' forum. The scraper was designed to build a database of usernames and uploaded images. My goal is twofold: learn about computer vision by analyzing graffiti pictures, and develop insight into how people use the internet to distribute graffiti images. Beyond analysis, I would love to create a graffiti recommendation engine for aspiring graffiti artists. The engine would look at a person's 'style' and suggest other artists they might benefit from studying.

Scraping process

I focused my first group of scraped images on New York City. I found the New York City forum thread on 12ozprophet.com. From there, I tailored the scraper to tag posts and the images contained in each post. I created unique IDs for the images and the posts. Now that this is complete, I may need to create unique IDs for the usernames, but for now this is not necessary.

The scraper pulled from over 440,000 posts within the New York graffiti thread. From those posts, over 275,000 images were gathered. From a brief overview, I can tell that not all images are graffiti. Similarly, not all images are of paintings: there are tags, throw-ups, pieces, and sketches. The images vary in quality and size. The dataset covers forum posts from 2009 to 2014. I'm surprised to see how much camera quality and digital photography changed in those few years.

Considering analysis

In the past few weeks, I have spent time understanding how to apply statistical analysis to large datasets. Because I was working with small sets, I was able to use Microsoft Excel's functions. Beyond 100 data points, Excel becomes noticeably sluggish at processing simple PivotTable tasks. I will use R and Python libraries to process future analysis; a sketch of what that could look like follows.
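Assuming the scraped rows are exported to a CSV (a hypothetical 12oz_images.csv whose columns match what the scraper below saves), pandas handles the per-user and per-month breakdowns in a few lines:

import pandas as pd

# Hypothetical export of the scraped rows; the column names match
# what the scraper below saves to the database.
df = pd.read_csv('12oz_images.csv', parse_dates=['date_published'])

# Breakdown of images by user, most prolific first
per_user = df.groupby('user').size().sort_values(ascending=False)
print per_user.head(10)

# Breakdown of posts by month
per_month = df.set_index('date_published').resample('M').size()
print per_month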

I am reading various books I purchased in the past from O'Reilly. The books' topics are mainly R, data analysis in Python, OpenCV, and machine learning. None of these topics are familiar to me, so I have also picked up a basic book on linear algebra. I would like to finish the preliminary results from the first dataset by next week and move ahead with scraping the rest of 12ozprophet.

Breakdown of all posts by date

Code

The code used to scrape is pasted below. The dataset will be made available in the near future. Note that the images posted on 12ozprophet are hosted on a wide variety of websites, uploaded by the users themselves, so the image analysis should not create a large load on the forum. I do not believe I need to download the images for analysis, but if I do, I will likely use a cloud virtual machine instance.

Also, I am unfamiliar with the Python way of writing programs. I primarily work with JavaScript and PHP. I have dabbled in Ruby on Rails, but am still dependent on gems and generators. As a result, much of my Python code is written with a JavaScript mentality.

#!/usr/bin/env python
import scraperwiki
import requests
import lxml.html

# The New York City thread spans 7,844 pages of posts
number_of_pages = 7844
post_iteration = 0   # running unique ID for posts
image_iteration = 0  # running unique ID for images

# Carried across posts: consecutive posts by the same user omit
# the username, so remember the last one seen
savedusername = 'null'
date_published = 'null'
post_image = 'null'

# Boilerplate to strip out of the post-date text
dateremovetitle = """N   E   W      Y   O   R   K      C   I   T   Y - """
dateremovere = """Re:"""

# Forum chrome (icons, buttons, reputation images) to skip
ignoredimages = [
    'images/12oz/statusicon/post_old.gif',
    'images/12oz/buttons/collapse_thead.gif',
    'images/12oz/statusicon/post_new.gif',
    'images/12oz/reputation/reputation_pos.gif',
    'images/12oz/reputation/reputation_highpos.gif',
    'images/icons/icon1.gif',
    'images/12oz/buttons/quote.gif',
    'clear.gif',
    'images/12oz/attach/jpg.gif',
    'images/12oz/reputation/reputation_neg.gif',
    'images/12oz/reputation/reputation_highneg.gif',
]

for i in range(1, number_of_pages):
    html = requests.get("http://www.12ozprophet.com/forum/showthread.php?t=128783&page=" + str(i)).content
    dom = lxml.html.fromstring(html)

    print 'Page: ' + str(i)
    for posts in dom.cssselect('#posts'):
        for table in posts.cssselect('table'):

            # Each post table starts with the author's username;
            # fall back to the last seen name when it is missing
            try:
                username = table.cssselect('a.bigusername')[0].text_content()
                if username != 'null':
                    savedusername = username
            except IndexError:
                username = 'null'

            # Pull the post date and strip the thread-title boilerplate
            try:
                post_iteration += 1  # unique post ID
                postdate = table.cssselect('td.alt1 div.smallfont')[0].text_content()
                postdate = postdate.replace(dateremovetitle, '')
                postdate = postdate.replace(dateremovere, '')
                date_published = postdate.strip()
            except IndexError:
                postdate = 'null'

            # Collect every image in the post, skipping forum chrome
            for img in table.cssselect('img'):
                imagesrc = img.get('src')
                if imagesrc in ignoredimages:
                    continue

                image_iteration += 1
                post_image = imagesrc

                post = {
                    'image_id': image_iteration,
                    'image_url': post_image,
                    'post_id': post_iteration,
                    'user': savedusername,
                    'date_published': date_published,
                }
                print post

                # Upsert into the local SQLite store, keyed on image_id
                scraperwiki.sql.save(['image_id'], post)

