Coding in Public

Recently, I’ve been live-streaming development sessions of Night in the Woods. I’m really enjoying it, and I thought I’d write up some notes on how I’ve done it, and give some tips I’ve picked up on how to get the most out of it.

Why Should You Code In Public?

There are a few reasons why I’ve been streaming my code. The field that I work in, independent game development, can be a pretty personality-oriented area. Because of this, it’s often important to develop the 😎 personal brand 😎. Videos are great for this, because they’re an opportunity to have your face and voice attached to the cool things you’re working on.

Streaming your code is also an excellent way to stay very, very focused on a single task. If you’re coding as part of a performance – and live streaming is very much a performance – you’re a lot less likely to get distracted and look at the internet for four hours.

Finally, having an audience of people looking at your code means you can do something I like to think of as multicore pair programming: you often get great feedback and advice from people watching you code. I’ve solved a number of bugs thanks to input from people who are watching me work.

Where Should You Stream?

There are a number of different options for streaming sites. The best-known sites for the kind of streaming that I do are:

  • Twitch: Very games focused, and a very large population. (I do my streams here.)
  • Mixer: Microsoft’s streaming site. Also games focused, but a smaller population; designed for very low latency.
  • YouTube Live: General video focused, and seems to be more designed for ‘event’-style broadcasts.

I use Twitch, largely because I work in games, so I piggy-back on the existing topic interest. It’s also very well supported by the various streaming tools and services, and brand recognition is high – if someone describes themselves as a streamer, it’s likely that they stream on Twitch.

How Do You Stream?

You don’t need a huge amount of software to stream; at minimum, you just need something that can upload a stream to your platform. The software that I use is OBS, which is a very nice (and very free) package that:

  • Captures your display and webcam
  • Composes them into a scene
  • Compresses and uploads the stream to your platform.

As far as gear goes, you also don’t need much. It’s very tempting to assume that you need lots of expensive equipment in order to be professional, but you really don’t – at minimum, all you need is your computer, and an internet connection.

If you have a webcam, that’s great! If you have a good microphone, that’s also great! But you don’t need them, and I want to be clear that you should pointedly ignore anyone trying to convince you that you do.

When I stream from my office, I happen to use a decent headset mic, so that I don’t have to think about it as much, plus a USB audio interface that lets me connect it to my computer. When I’m feeling ~fancy~, I connect a camera via an HDMI-USB interface, so that I can show my phone. That’s really it!

Because the content that I stream doesn’t have its own soundtrack, I play music while I work. This is for two reasons: it shows off my frankly exquisite taste, and it means that there’s no dead air when I’m not speaking.

However, when you’re doing broadcast work, you can’t just stream your music library – you don’t have the license for it, your videos will get muted, and you run the risk of your account being banned.

Instead, stream music that is licensed for broadcast. I happen to play music that I’ve received direct permission from the composer to play (such as Alec Holowka’s superb soundtrack to Night in the Woods), or music from Pretzel, a streaming service that plays rather good licensed-for-broadcast music.

Where To Learn More

This post wouldn’t exist without Suz Hinton’s write-up of her live coding setup. It’s got specific advice on setup, performance, and management of live coding, and was instrumental in getting me started. Go read it!

I hope this has gotten you interested in giving it a try, and if you start streaming yourself, I’d be delighted if you let me know!

Installing Unity ML-Agents

⚠️ Attending our tutorial at OSCON? 🔗 Also open the docs!

Want to explore the Unity Machine Learning Agents Toolkit (“ML-Agents”)? Here’s the easiest way to get up and running on Windows or macOS.

Unity ML-Agents is a great way to explore machine learning, whether you’re interested in building AI for games or in simulating an environment to solve a broader ML problem.

We’ll be posting a variety of guides and material covering various aspects of Unity’s ML-Agents, but we thought we’d start with an installation guide!


Interested in a quick introduction to Unity and ML-Agents? Check out the video of the talk we delivered at The AI Conference in New York City!


To use ML-Agents, you’ll need to install three things:

  1. Unity
  2. Python and ML-Agents (and associated environment and support)
  3. The ML-Agents Unity project

Unity

Installing Unity is the easiest bit. We recommend downloading and using the official Unity Hub to manage your installs of Unity.

➡️ Download the Unity Hub for Windows or macOS

The Unity Hub allows you to manage multiple installs of different versions of Unity, and lets you select which version of Unity you open and create projects with.

If you don’t want to use the Unity Hub, you can download different versions of Unity for your platform manually:

➡️ Download a specific version of Unity for Windows or macOS

We strongly recommend that you use the Unity Hub to manage your Unity installs, as it’s the easiest way to stick to a specific version of Unity and to manage multiple installs side by side. It really makes things easier.

If you like using command line tools, you can also try the U3d tool to download and manage Unity installs from the terminal.
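For example, assuming you have Ruby installed (u3d is distributed as a Ruby gem), something along these lines should get you going. These commands are based on u3d’s documented usage and may vary between releases, and the version number is just an example; substitute whichever release of Unity you actually want:

gem install u3d

u3d available

u3d install 2019.1.0f2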

Python and ML-Agents

Our preferred way of installing and managing Python, particularly for machine learning tasks, is to use Anaconda.

⚠️ Anaconda’s environments don’t quite work like virtualenv, or other Python environment systems that you might be familiar with. They don’t store things in the location you specify; instead, they store things in the system-wide Anaconda directory (e.g. on macOS, in “/Users/USER/anaconda3/envs/”). Just remember that once you activate an environment, all your commands run inside it.
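If you ever want to see exactly which environments you have, and where Anaconda has put them, you can ask conda to list them:

conda env list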

Anaconda bundles a package manager, an environment manager, and a variety of other tools that make using and managing Python environments easier.

➡️ Download the Anaconda installer for Windows or macOS

Once you’ve installed Anaconda, follow these instructions to make an Anaconda Environment to use with Unity ML-Agents.

➡️ First, download 🔗 this yaml file, and execute the following command (pointing to the yaml file you just downloaded):

conda env create -f /path/to/unity_ml.yaml

➡️ Once the new Anaconda Environment (named UnityML) has been created, activate it using the following command in your terminal:

conda activate UnityML

The yaml file we provided specifies all the Python packages, from both Anaconda’s package manager and pip (the Python package manager), that you need to make an environment that will work with ML-Agents.
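If you’d like to confirm what actually ended up in the environment, you can list its packages once it’s activated; conda list also shows the packages that were installed via pip:

conda list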

Doing it manually

You can also do this manually (instead of asking Anaconda to create an environment based on our environment file).

⚠️ You do not need to do this if you created the environment with the yaml file, as above. If you did that, go straight to “Testing the environment”, below.

➡️ Create a new Anaconda Environment named UnityML that runs Python 3.6 (the version of Python you need to be running to work with TensorFlow at the moment):

conda create -n UnityML python=3.6

➡️ Activate the Conda environment:

conda activate UnityML

➡️ Install TensorFlow 1.7.1 (the version of TensorFlow you need to be running to work with ML-Agents):

pip install tensorflow==1.7.1

➡️ Once TensorFlow is installed, install the Unity ML-Agents package:

pip install mlagents
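Before moving on, it’s worth asking pip to confirm what it installed; you should see TensorFlow 1.7.1 and the mlagents package listed:

pip show tensorflow mlagents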

Testing the environment

➡️ To check everything is installed properly, run the following command:

mlagents-learn --help

You should see something that looks like the following image. This shows that everything is installed properly.

The ML-Agents Unity Project

The best way to start exploring ML-Agents is to use their provided Unity project. To get it, you’ll need a copy of the Unity ML-Agents repository.

➡️ Clone the Unity ML-Agents repository to your system (see the note below if you’re coming to our OSCON tutorial!):

git clone https://github.com/Unity-Technologies/ml-agents.git

⚠️ If you’re coming to our OSCON session, please clone this repository instead: https://github.com/thesecretlab/OSCON-2019-Unity-ML-Agents

You should now have a directory called ml-agents. This directory contains the source code for ML-Agents, a whole lot of useful configuration files, as well as starting-point Unity projects for you to use.
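If you’re curious, you can take a quick look at what you’ve cloned from the terminal (use dir instead of ls if you’re in the Windows Command Prompt). The exact layout varies a little between ML-Agents releases, but you should see the Unity project alongside the Python source and the configuration files:

cd ml-agents

ls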

➡️ You’re ready to go! If you’re coming to our OSCON tutorial, you’ll need a slightly different project which we’ll help you out with on the day!

We’ll have another article on getting started (now that you’ve got it installed) next week!


In Portland? At OSCON?
Attend our OSCON 2019 session on 15 July 2019 to learn more!

📚 Unity Game Development Cookbook (1st Edition)

Our latest book covers everything you need to know about building games with Unity.

The book is available online, and in good bookstores. It was originally released in April 2019, and we consider it to still be current.

We really hope that you enjoy it! Please contact us if you have any questions or need a little help; we’ll do our best to get back to you, though it might sometimes take a few days.

We also try to keep the book’s downloadable code up to date.

Buy the book

Download code

You can download the resources for the Unity Game Development Cookbook (1st Edition) from GitHub: