Notation Capital

Today Nick Chirls and Alex Lines announced that I will be lending a hand at Notation Capital as a part-time partner. Electric Objects is more than a full-time job at this point, but I couldn’t pass up the opportunity to give back to the NYC technology community, and contribute in some small way to what Alex and Nick are building.

I got to know Alex in the summer of 2012, when he and I were on the 7-person team tasked with tearing down and rebuilding — a site with a few million monthly visits at the time — in six weeks. There are a few blog posts worth of stories from that sprint, but I’m going to let this photo of Alex’s keyboard the night before launch speak for it.

Nick and I met when he started at betaworks in 2011. We became fast friends, spending many long nights discussing, among other things, what some interesting alternatives might be to the way companies are funded and built. He’s one of my closest friends, one of those rare sorts in life that you just trust implicitly.

When I was just beginning to think about Electric Objects, I turned to Nick and Alex for support. While still at betaworks, Nick helped put together the first round of investment for Electric Objects. It was the end of 2013, and it was a weird moment for me and my almost-company. I was feeling the pull to work on it full time, but I couldn’t afford to fund development myself.

At the same time, I wasn’t ready for what had become the standard seed round of capital: an investment of $750k – $1.5mm, valuing the company north of $3mm. The company and product didn’t deserve it yet, and I wasn’t ready to shoulder that kind of responsibility, or make that kind of commitment, for a totally untested product and market.

With that in mind, John Borthwick and Nick put together a relatively small $200,000 round of investment to help me get the business off the ground, bring in a couple of freelancers, and begin to test my hypotheses. It was a lot of money, and I was so excited to get started, to work on this thing full time, to set off on my own.

And then Nick sat me down, looked me in the eye, and said: “Are you sure you want to do this? Pause for a moment. Think about what this money means, and where it will lead you. Think deeply about the project. Is this really how you want to spend the next 10 years? Does it mean that much to you? Weigh this heavily and sleep on it before you make a decision.”

That moment was the beginning of a journey for me, and for Electric Objects. But it also demonstrates what makes Nick and Alex so special: the connection to reality, the willingness to speak plainly, the honesty and transparency that is so missing from the world of venture capital — a world filled with myth and illusion, with big personalities and tiny, inescapable, insidious, and unspoken incentives.

I went for it, but I went for it with open eyes and a clear understanding of what I was signing up for.

The first thing I did after incorporating Electric Objects was make Alex and Nick my advisors. Alex was instrumental in helping me build my early engineering team, and Nick continues to be a critical sounding board for fundraising strategy.

In some ways I think Electric Objects helps validate the Notation Capital hypothesis. We benefited tremendously from the time and space afforded by a relatively small amount of capital early on, before committing to and hitting the ground running with a large seed round, and we benefited tremendously from the guidance and advice that Alex and Nick brought to the table.

It’s lonely to start a company without co-founders. Alex and Nick were two of a small handful of people that gave me the confidence and energy to keep going. They both contributed greatly to the success of the company, and helped grow it from a tiny seedling of an idea to an organization worthy of the mission we set out to fulfill.

I’m sure the learning curve will be steep, but I hope I can be helpful in some small way to the current and future Notation portfolio. I’m thrilled to join a tremendous group of other part-time partners, and I couldn’t be happier to do what I can to further the Notation Capital mission.

On Ephemerality and Permanence

From Bret Victor’s recent post on the ephemerality and permanence of the web:

“Think about speech, letters, newspapers, books, smoke signals… Each medium serves only a particular subset of social purposes, and each medium is technically transparent enough that people can understand what’s happening when they use it.”

Bret misses the point here. The web isn’t analogous to letters, newspapers or books. It’s closer to paper, on which letters, newspapers or books can be printed. Like paper (letters for private ephemeral communication, books for the storage and distribution of knowledge), the web can support many different kinds of storage and communication systems. The preservationist complains about its rate of deterioration while the criminal regrets the permanence of ink.

One doesn’t avoid this conflict by designing a more purposeful technology. It’s deeply human to be conflicted about information. We’re overwhelmed by it but we constantly seek more. We use information technologies in ways that surprise their creators, and other users. The misalignment of expectations around privacy is as old as the concept of privacy.

What changes is the pace of progress. As the pace of change slows, we humans have enough time to develop norms that help us deal with some of this nasty conflict.

Her and The Machine

I just returned from seeing Her, the new Spike Jonze movie, with a few friends this afternoon. Joaquin Phoenix was incredible, it was beautifully shot, and the music was excellent. And it set me to thinking. (Note: some spoilers below.)

In “Future Of Technology” conversations, it is far too easy to agitate on either extreme of the spectrum: to see technology as an unwavering force for good in our world (what Evgeny Morozov calls “solutionism”), or to see it as the spawn of ignorant genius, forever working to divide us from the very things that make us human. The film walked delicately between the two poles, offering, as Fred Wilson suggested, more questions than mockery.

Theodore, played by Phoenix, works for a company that writes, prints, and ships “beautiful handwritten letters” on behalf of customers to their loved ones. The movie opens on Theodore dictating a love letter in the voice of an old woman, addressing her husband of 50 years. We are at once moved by his words, and repelled by the apparent fraud being perpetrated on this relationship. The sender and recipient are complicit in an act of forgetting, focusing their attention instead on Theodore’s output, his poetry.

The audience is left with two discomforting notions: first, that it’s the content that matters, not the thought that counts; and second, that human feelings are predictable enough to be articulated by a complete stranger. Of course, it doesn’t take much to see the leap from predictability to programmability.

It is that very leap that the primary plot line of the film explores. Theodore falls in love with an AI supercomputer capable of speaking human, learning, and eventually growing into its own “feelings.”

The central question of the film is this: can language ever truly articulate and convey our emotions? And if so, can’t we program that language into a machine? As Theodore and his friend Amy struggle and ultimately find peace with their failed relationships, the film confirms what we all know to be true: that we don’t choose what we feel, nor should we expect to ever fully understand those feelings, or put them into words.

But it also leaves us with a somewhat startling apparent contradiction: that a supercomputer could in fact comprehend those feelings, if it could only process enough information. In the words of the AI supercomputer, doing so is like having “twelve conversations at once.” Even so, the supercomputer explains that while it can understand our emotions, it remains limited in its ability to express them back to us. This is not due to the computer’s own limitations, but to the limitations of our human capacity to hear and comprehend.

I didn’t mean to write so much about the film, but it articulates a fascinating paradox in theories of artificial intelligence, and Future of Technology solutionism more generally. It comes down to what computers are good at.

A computer’s ability to accomplish a task quickly and cheaply depends upon a human programmer’s ability to write procedures or rules that direct the machine to take the correct steps at each contingency. Computers excel at “routine” tasks: organizing, storing, retrieving and manipulating information, or executing exactly defined physical movements in production processes. How Technology Wrecks the Middle Class, David H. Autor and David Dorn, 2013

When a computer speaks, it is a human that is speaking, encoded in 0s and 1s, locked into a logic of predetermined use-cases. The notion that computers can have agency is silly. There is no us versus them. There is only us.

What’s fascinating to me is what happens when you connect computers. No, not computers: the people behind computers. While computers don’t have agency, people and networks of people certainly do, and the Internet makes network formation and communication easier and cheaper than ever.

Lowering the barriers for human expression gives more people, and more groups of people, more diverse opportunities for communication. Perhaps this diversity unlocks a new range and capacity for human language. All of a sudden, it appears that we can indeed have “twelve conversations at once.”

Perhaps new tools that lower the barriers for network formation and personal expression are making expressing a complicated feeling a tiny bit easier? Perhaps the only thing wrong with this film’s vision for artificial intelligence is the presumption that when it comes, it will be artificial.

“There is only a perspective seeing, only a perspective ‘knowing’; and the more affects we allow to speak about one thing, the more eyes, different eyes, we can use to observe one thing, the more complete will our ‘concept’ of this thing, our ‘objectivity’, be.” Nietzsche, Genealogy of Morals, Third Essay

Remember the Frontier

Cyberspace, in its present condition, has a lot in common with the 19th Century West. It is vast, unmapped, culturally and legally ambiguous, verbally terse (unless you happen to be a court stenographer), hard to get around in, and up for grabs. Large institutions already claim to own the place, but most of the actual natives are solitary and independent, sometimes to the point of sociopathy. It is, of course, a perfect breeding ground for both outlaws and new ideas about liberty. Crime and Puzzlement, John Perry Barlow, 1990

I hear it again and again: “I’m done with software. There’s nothing left to build. No one’s working on anything interesting.” There’s little I dislike more than this sentiment, but I get it. I’m feeling slightly jaded myself. Sometimes I look around and I can’t find the frontier.

So let’s remember a few things.

There is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. The Computer for the 21st Century, Mark Weiser, 1991

Computers used to be new, and new things require metaphors to help us understand their purpose. At some point, we began to take those metaphors for granted. Let’s not forget that our screens and computers and the software that runs on them were designed by humans, that they don’t have to be tomorrow what they were yesterday, or what they are today.

Today when we think of computers and screens we feel anxiety bubbling up within us. We feel like we’re drowning. There’s so much to consume! How are we ever going to consume it all?

Don’t blame computers, blame design and blame history. Not bad design, just design for a specific purpose — generally speaking, we makers have kept ourselves busy making a home for traditional forms of content distribution, meaning: broadcast publishing. In a manic rush we’ve spent the last decade gathering up as much user attention as we could. Let no rock go unturned, no inconsequential feature unexplored. Everything needs to be bigger, faster, more, more, more.

We took a medium designed for human connection and created the most valuable broadcast publishing platform of all time. That’s where the money is, after all. No matter how sophisticated our ad technology, advertisers pay for eyeballs, and so we build websites and apps to aggregate as many eyeballs as possible under one corporate roof. We make design decisions that prioritize engagement and network size above all else.

It’s strange if you think about it: communities and conversations lose much of their definition and sense of identity as they grow in size. There are serious UX tradeoffs that come from growth. But more is better, old habits die hard, and advertising is a nasty addiction.

The reverse chronological stream of “the social web” made sense to us because it embedded content within conversation. It gave us “content objects,” around which we could engage with people whom we liked and admired. But now the very companies that made the stream ubiquitous are retrofitting it to accommodate a different style of conversation: broadcast. Why? Because that’s what advertising understands.

What happened to our frontier?

We used to have a map of a frontier that could be anything. The web isn’t young anymore, though. It’s settled. It’s been prospected and picked through. Increasingly, it feels like we decided to pave the wilderness, turn it into a suburb, and build a mall. And I hate this map of the web, because it only describes a fraction of what it is and what’s possible. We’ve taken an opportunity for connection and distorted it to commodify attention. That’s one of the sleaziest things you can do. What Screens Want, Frank Chimero, 2013

I don’t think you can understand the computer and the Internet without understanding the cultural context in which they were designed. They remain the most empowering technologies in history, but embedded within them is a historical paradox. Born in World War II academic and government research labs, these technologies emerged by the grace of major corporations and the State, but on the back of engineers and artists whose spirits were infused with counterculture.

The counterculture raised their voices and arms against a world in which faceless corporations and militarized states filled their coffers and expanded their powers at the expense of their customers and citizens, yet it was those very institutions that made the computer and the Internet possible.

That paradox stays with us to this day. The Internet is a place for humans to connect. What is at stake today is the style of that connection.

It can be a place for personal expression, a place that prioritizes creativity, a generative place; or it can be a place that looks a lot like the places we’ve had before: the media places of the 20th century, defined by hierarchy, uniformity, isolation, anxiety, and fear. Today it is both.

Resolving the paradox

Let’s not forget that the Internet belongs to us and that the companies who track us, shout at us, buy and sell us — they are our guests, staying so long as it pleases us. Let’s not forget that computers are a lot bigger than our laptops and our phones, that the Internet is a lot bigger than the web, that the web is a lot bigger than Facebook and Twitter, and that most people live outside of New York City and San Francisco.

Today we are surrounded by large public companies that care first and foremost about increasing the wealth of their shareholders. The interests of those shareholders are only sometimes aligned with the interests of their users. Let’s not assume that these companies will be the source of the next great technological breakthrough. In fact, it’s likely that they won’t be.

New and brilliant and transformative and beautiful things don’t come from shareholders. They don’t come from research analysts. They don’t come from pundits or the press. They come from the users who are builders, who come together in order to create something meaningful, lasting, and delightful.

Just because computers today fill us with anxiety doesn’t mean they have to tomorrow. We are only just beginning to shake 150 years of media habits and business models, both of which have for some time been dominated by broadcast. We are only just beginning to imagine what “consuming media” looks like in a networked, mobile, always on, always connected society — those characteristics are, after all, only a few years old. It might not look like today’s consumption at all.

We are lucky enough to be born at a moment of dramatic transformation, and it’s at moments of transformation when old rules are broken and new rules are made. The Internet isn’t over, it’s just getting started. We have a responsibility — to the past and to the future — to buck the trend, to challenge tradition, to guess and second guess, to make new rules in the spirit of openness and invention.

Ready or not, computers are coming to the people. That’s good news, maybe the best since psychedelics. Spacewar, Stewart Brand, 1972

Let’s not forget that the Internet can be weird and unpredictable.

How To Change The World

I recently read George Packer’s “Change the World,” which is something of an excerpt from his recently published book, The Unwinding, and now I am moved to say something about it.

Unfortunately, you’re probably thinking now that my post could go one of two ways:

Option 1) Defend Silicon Valley, point to the sweeping impact its innovations have had on our society, point to the oppressive governments and the stagnant industries that it has disrupted. This is a worthwhile line of argument, by the way.

Option 2) Decry its excess and vanity, call on entrepreneurs and investors to step up their game in order to create a better tomorrow. Ditto.

In fact, I’m going to try something different. I’m inspired by the nuance of Packer’s delivery as much as by its content. The article is about noticing, it’s about calling attention to a culture, and drawing a connection to the industries with which that culture coexists. While Packer’s skills, experience, and hard work mean that he can tackle such a project for an entire section of our country, I’m going to lower the bar a bit — I’d like to try and articulate something I’ve noticed about myself.

I’m going to quote Packer’s article at length here because these words are both relevant to my confession and beautifully written:

“A few years ago, when Barack Obama visited one Silicon Valley campus, an employee of the company told a colleague that he wasn’t going to take time from his work to go hear the President’s remarks, explaining, “I’m making more of a difference than anybody in government could possibly make.” …At places like Facebook, it was felt that making the world a more open and connected place could do far more good than working on any charitable cause. Two of the key words in industry jargon are “impactful” and “scalable”—rapid growth and human progress are seen as virtually indistinguishable.

“When financiers say that they’re doing God’s work by providing cheap credit, and oilmen claim to be patriots who are making the country energy-independent, no one takes them too seriously—it’s a given that their motivation is profit. But when technology entrepreneurs describe their lofty goals there’s no smirk or wink.

“Technology can be an answer to incompetence and inefficiency. But it has little to say about larger issues of justice and fairness, unless you think that political problems are bugs that can be fixed by engineering rather than fundamental conflicts of interest and value.”

I, for one, am guilty of pretty much every arrogance called out above. I’ve said these things. Pretty much to the word. It sounded something like…

These issues of the moment — these “causes” — are for another person. They’re for my peers at law firms, in government, in journalism. My issues are the issues of the future. Don’t talk to me about things happening this year — the system is so screwed up that I couldn’t possibly have an impact anyway — I’m focused on building the things that will dictate the next twenty! I don’t have time to change the world in your way, full of inefficiencies and tradition, because I’m busy changing the world in my own way.

Yeah yeah, I know these issues are important, but I don’t have time to read the volume of articles that would make me sufficiently intelligent about the subjects such that I could talk about them in public (and make it clear to all listeners that I am an expert). If it comes up at dinner, I’ll just dismiss them as unimportant and return the conversation to the future of connected devices.

Yeah yeah, I realize that I should know more about what’s going on around me. I know I should do more about what’s going on around me. But wouldn’t you, society, rather I spend my days and nights obsessing about the future of news discovery? Wouldn’t you rather I take my passion and energy and point it in just one direction? Isn’t impact correlated with focus?

That’s what we want anyway, to make an impact. We want to make a difference, in the most literal sense. We want to change things and then turn around and say, “look, I changed that.”

For the better?

Honestly, isn’t that a secondary concern?

I’m beginning to see that these were just excuses, born out of laziness, and forged in the fiery pits of self-deception. I’m beginning to realize that “for the better” is not about the changes you can make with your product. “For the better” is not about how many people you can employ. It’s not about health benefits or distributing your stock options more equitably. It’s not about making your investors rich. It’s not just about sending your children to school.

It’s about recognizing that you are a participant in a series of — to borrow words from the internet — overlapping networks. It’s about recognizing what you’ve gained from being a part of those networks. It’s about recognizing that the only way to improve them is to improve them today.

Changes don’t happen in the future, they happen in the present. They happen when the smartest, most passionate people remove their blinders and join the moment. Myopia isn’t the same as focus.

So I’m going to make an effort to be a little bit more aware. And because every statement that renounces self-interest must, in the end, point out that renouncing self-interest is actually the most self-interested thing you can do: I have a hunch that being more culturally and politically aware will help me build better products, but I’m certain that it will make me a happier person.

(Thanks to Eric Lach, Josh Stylman, Nick Chirls and Alex Rosen for helping me think through these ideas.)

Nobody Knows That I Use These Apps

More and more, I’m finding myself using mobile apps without ever opening them. I use apps like Moves and Dark Sky every day, but the app developers don’t know it.

Dark Sky sends me a push notification with a custom sound whenever it’s about to rain. I’ve become so accustomed to it that I no longer need to even take it out of my pocket when I receive an alert. The custom sound is enough.

Moves sends me a push notification every morning, telling me how many steps I took on the prior day. I don’t really need to open the app to learn more. A quick glance at the notification tells me what I need to know, and I am free to continue on with whatever I’m doing.

Moves and Dark Sky have no idea that they are providing value to me, since I typically do not take any action on the notification that is visible to the developer or even the OS.

The business impact is that companies are evaluated and funded on the basis of metrics like Daily Active Users, Monthly Active Users, impressions, and visits. Notifications happen upstream of all of these metrics.

The product impact is that if the goal is to deliver value in the notification itself, then these apps have no way of knowing whether or not their notifications are successful.

My hunch is that, for these two reasons, push notifications are an under-explored interface. Imagine an entire suite of apps with which you interact without ever opening. Imagine if app developers could send more data (images, videos) through push notifications, or even receive simple responses (“Yes” / “No”) from users without requiring users to launch the application itself.
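To make the idea concrete, here is a minimal sketch, in Python, of what a richer, "actionable" notification payload might look like, loosely in the style of Apple's APNs JSON format. The alert text, category name, and sound file are all hypothetical, and the payload is only assembled, not sent:

```python
import json

def build_actionable_payload(alert, category, sound):
    """Assemble an APNs-style notification payload.

    The "category" key is the hook that lets an app register simple
    response buttons (e.g. "Yes" / "No") the user can tap from the
    lock screen, without ever launching the app itself.
    """
    return {
        "aps": {
            "alert": alert,
            "sound": sound,       # a custom sound can carry the whole message
            "category": category,
        }
    }

payload = build_actionable_payload(
    alert="Rain starting in about 10 minutes",
    category="RAIN_ALERT",        # hypothetical category name
    sound="rain-warning.caf",     # hypothetical custom sound, a la Dark Sky
)
print(json.dumps(payload, indent=2))
```

The point of the sketch is that everything the user needs is in the payload itself; the developer only learns the notification mattered if the user taps a response.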

Our phones and the apps within them are with us at all times — they are starting to feel more and more like extensions of the brain, augmenting its inputs with sensors that don’t come pre-installed in humans.

I’d love to see some forward progress in notification interfaces from the major mobile operating systems. That’s the type of change that could unleash a massive wave of innovation in app development.

Don’t Learn How To Code, Learn How To Make Things

There’s a lot of chatter and hype around normals learning how to code. I’m fully in support of the hype because I (like many others) believe that understanding how machines work is an increasingly important skill in a world where human relations are increasingly dependent on networked applications.

As a result of that hype, if you sit down with an MBA interested in technology — with due apologies to MBAs for using them as the ultimate barometric gauge of hype — they will tell you that they are learning how to code. The typical evidence provided at this point in the conversation is an account at Codecademy (and all credit to the awesome team at Codecademy for this being the case).

Here’s how it goes: Before you even get started you’ve decided that you don’t want to be an engineer. You convince yourself that you provide enough value as “the business gal/guy,” and that you just need to know enough to call bullshit on the engineers. You, after all, know how to raise capital. You sign up for Codecademy. You spend 3 months deciding between Python and Ruby, because you heard Django was more powerful or something but Rails had better community support or something. You in fact have no idea what that means. You maybe do a tutorial or two. Oh wait, I should be learning Node.js. It’s the future. Then… hey what’s that shiny thing over there?

I’m allowing myself to move into snark:overdrive because I’m being self-deprecating. Yes, I, like many other business dudes and dudettes before me, have fallen into the abyss of half-assery.

Here’s how you manage to crawl out of it. Stop trying to learn how to code. Stop it right this instant. Doesn’t that feel nice? You didn’t really like it anyway, did you? Because it’s not really that fun. Syntax errors, terminal commands, servers, consoles, frameworks, libraries, gems, classes, models, views, controllers, fml.

You know what is fun? Making things. Turning a spark of creative insight into a thing that you can show people — a thing that people can use and from which they can derive some iota of pleasure or utility. Start with a simple website. Basic HTML and CSS. No product is too small. In fact, the opposite is true. If you don’t know how to build the first version of your product in a weekend (a usable, working version), don’t try to build it. Programming is a means to an end, not an end in itself. You should be trying to do as little of it as possible to make the thing that you want.
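To show how small a "usable working version" can be, here is a sketch using nothing but the Python standard library: a one-page website, served from a background thread and fetched once to prove it works. The page content is, of course, a placeholder:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<!doctype html>
<html>
  <head><title>My Weekend Project</title></head>
  <body><h1>Version one. Shipped.</h1></body>
</html>"""

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request gets the same hand-written page -- that's the product.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Serve on an OS-assigned port in a background thread, then fetch the page once.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]
html = urllib.request.urlopen(url).read()
server.shutdown()
print(html.decode())
```

That is an entire, working website in about twenty lines, and replacing the bytes in PAGE with your own HTML and CSS is the whole of version one.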

Use tutorials like this one for Rails and this one for Python to introduce you to new concepts (and read this post while you’re at it), but as soon as it starts to feel like work, stop what you’re doing and use that newly-gleaned knowledge to build something cool.

Here are some other tools that I recommend:

As you build, you will actually begin to find that programming can in fact be fun! You’ll struggle for hours to solve a problem and literally clap out loud when you find a solution. Then when you realize that that single solution enables a whole set of user-facing features, you’ll pee yourself a little with joy (or because you’re so engrossed that you just couldn’t bring yourself to go to the bathroom. Go to the bathroom guys, that’s gross). The moment I discovered caching? zomg. The moment I discovered, while building an application, what MVC actually meant? Fuggetaboutit.

Here are some things I built in the last few months:

- A personalized vinyl store based on your data from Rdio and
- See what your friends are liking on Tumblr
- We asked 20 people in 20 days about the last great thing they saw on the Internet (made w/ @jvanslem).
- A word cloud generator for your most frequently tweeted words (made w/ @alexmr).
- What the name says…
- No description needed.
- i.e. the alternative title for this post.

This isn’t rocket science. The only thing getting in the way is your commitment to programming as an end in itself, and your ambitions to build the next great social network for pets or nothing at all. Start small, make things, and then when you’re done, make some more things.

The Broadcast-ification of Social Media

I originally wrote this post for the Harvard University Nieman Lab Predictions for Journalism 2013

There is an inherent tension in social software between content discovery and the quality of conversation around that content. Group conversations get worse as groups grow, and groups grow as group discovery improves — if it’s easier to find something, more people will find it. Therefore, the easier time I have finding good conversations, the less likely those conversations are to be any good (e.g. Reddit front page vs. Subreddits). Paradoxes should be named, so let me know if you have any good ideas.

Let’s look at Twitter through this lens. Twitter began as a space for conversation — a messaging platform. It exhibited characteristics of a “many-to-many” network. Anyone could publish, anyone could follow anyone else, and discovery in this context meant discovering people to follow, not content to consume.


Producing Media in Volume

An awesome Branch conversation today with Nina Khosla, Eric Lach, Rob Greco, and Max Fenton!


How Technology is Making Us Stupid and Destroying Everything Good


Yawn. Yawn. Yawn.

I’m so tired of these awful headlines.

“Is Facebook Making Us Lonely?”

“Does Facebook Turn People Into Narcissists?”

“Is technology x making us negative human characteristic y?”

Must we play into the fear and self-doubt of those who feel lost in today’s primary medium of interaction? Must we rely on a false nostalgia to defend against the scary new and unknown? My god, were you guys even paying attention during Midnight in Paris???

But worry not, ye demagogues of Internet-phobia, you’re in good company.
