
Activity

Welcome

Unplugged, Really

This workshop is unplugged. We have no slides. We will explain through demonstration after a little bit of setting the stage.

This workshop is not designed to give you as much information as possible. Instead, it is designed for you to get really comfortable with a few key ideas. To really absorb them. To hold them. For them to stay with you. To sit with a few ideas long enough that you let them shape your future choices.

Literacy is liberatory, after all, and we want you to feel a bit of liberation.

So, we invite you to really unplug:

What is AI?

Currently, the biggest advance in AI is LLMs. Some are “vision” models or “audio” models, but they all build on the same basic idea.

Let’s break down that term.

Models

A model is a simplification used to communicate an idea. In ML, our model describes the assumptions about the world that we use to find patterns in the data.

Some examples:

In computing, our models are always mathematical models. In ML, they are typically statistical models.


Language

Large

Putting it together

Over time, ML has produced many different models, but the one that happens to have hit is the generative language model.

It starts with a simple assumption:

We can generate sequences of words by sampling from a distribution of what word comes next given a sequence of past words.


Specifically: we assume that the distribution of the next word given the previous words is all we need.

$P(\text{next} \mid \text{previous})$
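Written out over a whole sequence, this assumption is the standard autoregressive factorization (a sketch; the notation $w_t$ for the $t$-th word is ours, not from the workshop materials):

```latex
P(w_1, w_2, \ldots, w_n) = \prod_{t=1}^{n} P(w_t \mid w_1, \ldots, w_{t-1})
```

That is, generating a sentence is just repeatedly sampling the next word from this conditional distribution.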

A Simpler Model

LLMs require a lot of parameters because spoken language is complex: there are a lot of words, and meaning can be conveyed over long strings of them.

So, LLMs use a deep neural network to implement that distribution in a computer.

However, if we use a very simple language and, for now, condition on only the one previous word, then the distribution can be represented by a table.
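To make the table concrete, here is a minimal sketch in Python. The vocabulary and counts are made up for illustration; each row of the table plays the role of one bin, and sampling the next word is one weighted draw from that row:

```python
import random

# A toy "pretrained" model for a very simple language:
# for each word, the counts of which word follows it.
# (These counts are invented for illustration.)
bigram_counts = {
    "<start>": {"the": 3, "a": 1},
    "the": {"cat": 2, "dog": 2},
    "a": {"dog": 1},
    "cat": {"sat": 1, "<end>": 1},
    "dog": {"ran": 2},
    "sat": {"<end>": 1},
    "ran": {"<end>": 2},
}

def sample_next(word):
    """Draw a next word with probability proportional to its count,
    implementing P(next | previous) as a table lookup."""
    row = bigram_counts[word]
    words = list(row)
    counts = list(row.values())
    return random.choices(words, weights=counts)[0]
```

For example, `sample_next("the")` returns `"cat"` or `"dog"` with equal probability, because the table stores equal counts for both.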

Even better, we can do the random sampling with physical objects.

So, we will implement this distribution physically.

Sampling a Pretrained Model

Sampling procedure

  1. A helper posts a sticky note in the color of the prompt on the board

  2. The facilitator starts at the bin labeled with the prompt and draws a ball

  3. A helper adds a sticky note in the color of the drawn ball

  4. The facilitator draws from the bin matching the last sticky note on the board

  5. Repeat until a white ball is drawn
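The procedure above can be sketched as a short loop. The bins and their contents here are hypothetical stand-ins for the physical bins of colored balls; a white ball acts as the end-of-sentence marker:

```python
import random

# Hypothetical bins of colored balls; each ball names the next word,
# and "white" means stop. (Contents invented for illustration.)
bins = {
    "prompt": ["the", "the", "a"],
    "the": ["cat", "dog"],
    "a": ["cat"],
    "cat": ["ran", "white"],
    "dog": ["white"],
    "ran": ["white"],
}

def sample_sentence():
    """Follow the sampling procedure: start at the prompt bin and keep
    drawing from the bin of the last posted word until white is drawn."""
    sentence = []                 # the column of sticky notes
    bin_label = "prompt"          # step 2: start at the prompt's bin
    while True:
        ball = random.choice(bins[bin_label])  # draw a ball
        if ball == "white":       # step 5: white ends the sentence
            return sentence
        sentence.append(ball)     # step 3: post a sticky note
        bin_label = ball          # step 4: move to that ball's bin
```

Running `sample_sentence()` a few times produces different sentences from the same bins, which is exactly what the physical activity demonstrates.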

Training a Model

Discussion

Second Training and Reinforcing Ideas

Closing