In this course, students will learn how to use JavaScript and web-based technologies to create algorithmic musical compositions and experimental web-based instruments.
There are two separate assignments you'll be submitting on Canvas. For the first assignment, you will design and develop your own web-based musical instrument. The instrument should allow users to interact with it and generate sounds in a meaningful way. This should not be a web-based version of an existing instrument (like a guitar or piano) but rather something more experimental which embraces the creative possibilities of the Web.
The second assignment will be to create an algorithmic system that generates musical compositions. This should not be a digital/coded version of a pre-existing song or melody and ideally not something that could be easily written as a classical score, rather it should be a composition that leverages the generative potential of the Web. That all might sound a bit vague at this point, but we’ll be spending all quarter digging into the details, starting with how we can generate sounds from code.
Through the use of the WebAudio API and JavaScript libraries like Tone.js, students will learn how to programmatically generate and manipulate sound, creating interactive and generative audio works that can be shared online.
This will usually start with some example code I share on this site. Like the example below, this code will be interactive. Press the "play note" button to hear a sound, then change the number on line 2 below and press the "run code" button above the code. Then press "play note" again to hear the change in frequency.
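To give a sense of what that kind of example looks like, here is a hedged sketch (not the actual embedded example from this site) of playing a tone with the Web Audio API. The `frequency` value is the sort of number you'd change to hear a different pitch:

```javascript
// A minimal sketch of playing a tone with the Web Audio API
// (illustrative only, not this site's actual embedded example)
const frequency = 440 // try changing this number, e.g. to 220 or 880

function playNote (freq) {
  // AudioContext only exists in the browser; guard so the sketch
  // can also be read or run outside one
  if (typeof AudioContext === 'undefined') return freq
  const ctx = new AudioContext()
  const osc = ctx.createOscillator()
  osc.frequency.value = freq // pitch in Hz
  osc.connect(ctx.destination) // route the oscillator to the speakers
  osc.start()
  osc.stop(ctx.currentTime + 1) // stop after one second
  return freq
}

playNote(frequency)
```

Doubling the frequency raises the pitch by an octave, which is why changing 440 to 880 sounds like the "same" note, only higher.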
← Press this button to re-run the code, this will take any changes you made and re-render the result to the right of the editor.
← Press this button to copy the full code (including any hidden code) to your clipboard. If you have an idea or a question you want to "discuss" (i.e. process) with an AI (like Claude, ChatGPT or PhoenixAI), it helps to include the code in your "question" (i.e. input).
← Press this button to open the full code in netnet. This website is great for quick tests, but if you want to keep experimenting with an example you should work on it elsewhere. On netnet you can learn more about the code and spot errors. You can also "save" your progress by choosing to either download it, create a project (when you're logged in to a GitHub account on netnet) or generate a quick URL you can use to share a sketch with me via email.
← Press this button to download the full code to your computer. While those of us newer to coding might prefer to work in netnet, others might already have a preferred code editor, programming environment and workflow. Those students can use this button to download the code in these examples to use as starting points for their own projects. Refer to the assignments page for more info on submitting assignments.
Along the way, the class will also survey works by artists working in this field and may also feature a visiting artist who will walk students through their own practice. Themes of generative art, randomness and chance, originality and machine creativity, and the cultural implications of influential musical algorithms will also be explored.
In addition to the class lectures and accompanying notes (on this site), throughout the quarter I’ll be recommending readings and videos to supplement discussions we’ll have in class, as well as starting points for sketches (code experiments like the one above) for you to keep practicing and expanding on what we learn in class. This will all form the necessary background and foundation for the two assignments mentioned above.
This class is an intermediate-level programming course. A beginner to intermediate understanding of core programming concepts (ideally in JavaScript) is required. While a background in music can certainly be beneficial, it is not required for success in this course.
Ultimately, this means you need to understand what code is: a special language that can be used to get the computer to do all the myriad things a computer can do. Like writing a recipe or casting a spell, the code we write is a set of instructions for the “magic” we want the computer to generate. In our case, we’ll be writing in JavaScript, the de facto programming language of the Internet. If you’re familiar with other programming languages (like C++, Python, Java, etc.) then you’re likely already familiar with all the core concepts and will just need to get used to JavaScript’s syntax and eccentricities. We’ll be reviewing these in Ch 1, but for a much deeper dive I highly recommend the book Eloquent JavaScript by Marijn Haverbeke (the entirety of which is available online for free).
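For those coming from another language, a few of those JavaScript eccentricities can be seen in a short sketch (a hedged sampler, not an exhaustive list; the `tempo` example is just illustrative):

```javascript
// variables: prefer const (can't be reassigned) and let over the older var
const tempo = 120 // beats per minute, an illustrative value
let beat = 0

// arrow functions: a compact syntax for function expressions
const beatsToSeconds = (beats) => (60 / tempo) * beats

// loose equality (==) coerces types; strict equality (===) does not
const loose = ('2' == 2) // true: the string is coerced to a number
const strict = ('2' === 2) // false: different types never strictly match

// template literals interpolate values into strings with ${}
const message = `one beat at ${tempo} BPM lasts ${beatsToSeconds(1)} seconds`
```

Quirks like `==` vs `===` are exactly the sort of thing Eloquent JavaScript covers in depth.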
Plagiarism
Everything is, in some sense, a remix. Plagiarism of concepts, code, compositions, samples and/or other elements is strongly encouraged, so long as you leave clear attribution within your code via comments. Ensure that anything you copy is in some way transformed, either by creating a variation on the copied elements or combining those elements with other copied elements. NOTE: transformation or combination (however subtle) is not a substitute for attribution, but rather a requirement for all copied elements.
AI Policy
We’ve entered a new era of “Machine Learning” or AI. These algorithms are having (and will continue to have) drastic effects on every aspect of our society (including art). Today, artificial neural networks trained (often requiring enormous amounts of energy) on troves of data (which are not always ethically sourced) can make “predictions” and generate “hallucinations” (often with clear biases) that would have seemed like impossible sorcery just a few short years ago. In certain high stakes applications this can save lives, but it can also destroy them. In other contexts this biased hallucinatory predictive sorcery can be quite exciting, as is the case with media art. This technology, like many others that came before it (smart phones, the Internet, the computer) will most certainly change everything in our field, exactly how and to what extent is still anyone’s guess.
"These talking machines are going to ruin the artistic development of music in this country. When I was a boy [...] in front of every house in the summer evenings, you would find young people together singing the songs of the day or old songs. Today you hear these infernal machines going night and day. We will not have a vocal cord left. The vocal cord will be eliminated by a process of evolution, as was the tail of man when he came from the ape."
John Philip Sousa (1906)
In the interest of collectively learning how to leverage its promises and minimize its perils, I encourage anyone interested to experiment with AI (beyond the tools covered in class) so long as you are transparent about what/when/how you use it and are willing to share your process/perspective on it in class.
Composer and sound designer Mark Henry Phillips on how AI music generators could fundamentally upend the industry, from WNYC's On the Media (Dec 27, 2024)
To be clear, when I say that we’ll be creating algorithmic systems that generate musical compositions, I am not talking about artificial neural networks (like so much of today’s generative AI); I’m talking about something much simpler, what we might call “classical AI”: meticulously and deliberately hand-crafted code. We’ll be “writing” our algorithms, not “training” them, as has become the norm in this era of machine learning. Our goal is to be very intentional and critical about the algorithms we write; we want to explore and experiment with the possibilities of the digital medium, and doing so successfully requires a different starting point. Some of us may choose to incorporate specific neural networks as a part of our code, or maybe use a generative AI system to generate samples or an LLM to help us debug our code (more on that below), but at the end of the day it should be your bias that finds its way into the DNA of the systems you create (not the bias of a large foreign dataset).
Consider, for example, the difference between how Mark Henry Phillips, a musician and composer with a more conventional background, engages with AI as a collaborator vs how Holly Herndon, an artist and composer with a background in experimental new media, engages with AI as a medium in itself.
In this class the most likely use of AI will be code generation/evaluation using LLMs (large language models). Again, you are encouraged to use these in ways that supplement your learning rather than impede it. To ensure that is the case, I ask that:
When asking an LLM for help with your code make sure to practice the terminology we’ve learned in class, use it as an opportunity to practice articulating your creative goals just like you would to another person. Consider starting with something similar to this prompt.
If an LLM generates any code (even a single line) that you do not understand, ask it to explain it to you before you copy it into your own sketch.
Consider sharing your code with an LLM and asking it for feedback; be specific about what you want feedback on (coding style, clarity, efficiency, creativity).
Any conversations related to a sketch should be submitted alongside the assignment on Canvas; this can be done either by generating a share link on ChatGPT or by using https://aiarchives.org (which works with AI models beyond ChatGPT).
Attribution: Text and code written by Nick Briz. The code editor icons designed by Meko and licensed under Creative Commons Attribution License (CC BY 3.0). Sheet Music generated using ABC.js by Paul Rosen and Gregory Dyke. All sounds generated using the Web Audio API and/or Tone.js by Yotam Mann and other contributors.