
Welcome to "ai-by-ai.com"

This is the site where I make AI that makes AI.


I'm Michael Mulet, and I'm currently working on a project called "ai-by-ai", where I use Artificial Intelligence to make Artificial Intelligence.

Just below, you'll find the first of these AIs. I've named it Spudmund Freud. It is a chatbot based on the mother of all chatbots, ELIZA.

ELIZA is not as smart as today's AI models; all of its responses are pre-written. What makes Spudmund Freud different is that it uses a neural network to generate both its patterns and responses, all 1,000,000 of them. It's ELIZA on a scale never seen before.*

Try it out below! Spudmund Freud will play the part of a "RoboTherapist", asking about your feelings and your problems, but do keep in mind that this is not a real therapist. Afterwards, read on to learn more about the project and what makes it work.


*If anyone knows of a larger ELIZA, please let me know and I'll scale this one up to match it.

Source on GitHub

[Interactive chat widget: a video call with Spudmund Freud, with a transcript and a text-speed control.]

Thanks for playing with the chatbot! If you enjoy this kind of stuff, follow me on GitHub, or even sponsor me if you want to support my work.

Check out my other stuff:

How and why

Michael Mulet - March 02, 2023

One of the main reasons we have usable A.I. today is the observation, in the early 2010s, that we could use GPUs for parallel processing, which greatly sped up deep learning algorithms. This breakthrough allowed researchers to train much larger and more complex models in a reasonable amount of time, leading to significant advances in computer vision, natural language processing, and other areas of AI. The current trend is that the larger and more complex the model, the better it performs. In other words, deep learning models scale up very well.

The point of "ai-by-ai" is to use modern deep learning techniques to revisit the old-school A.I. field. Things that were once considered impossible, like making a chatbot with 1,000,000 hand-coded responses in a day, are now possible. So, I'm starting with ELIZA, one of the first chatbots, and scaling it up, just to see what happens. Here's how I did it:

First Stage: ChatGPT

First, I prompted ChatGPT to generate a JavaScript version of ELIZA. The whole point of this site is "ai-by-ai", so I wanted the A.I. to generate as much code as possible. Here is my exact prompt and its response:

Make a javascript version of the famous ELIZA chatbot

Predictably, it said "sure", and produced a short biography of ELIZA and this code:

const eliza = {
  patterns: [
    { pattern: /hello/i, response: ["Hello!", "Hi there!"] },
    { pattern: /how are you/i, response: ["I'm doing well, thank you for asking.", "I'm fine, thanks for asking."] },
    { pattern: /what is your name/i, response: ["My name is ELIZA.", "I'm ELIZA."] },
    { pattern: /i am (.*)/, response: ["Why do you think you are $1?", "How long have you been $1?"] },
    { pattern: /i feel (.*)/, response: ["Why do you feel $1?", "What do you think caused you to feel $1?"] },
    { pattern: /i need (.*)/, response: ["Why do you need $1?", "What would happen if you didn't get $1?"] },
    { pattern: /i want (.*)/, response: ["Why do you want $1?", "What would happen if you got $1?"] },
    { pattern: /i think (.*)/, response: ["Why do you think $1?", "How long have you thought that $1?"] },
    { pattern: /yes/i, response: ["I see.", "I understand."] },
    { pattern: /no/i, response: ["Why not?", "Are you sure?"] },
    { pattern: /(.*)/, response: ["Can you tell me more?", "I'm listening.", "Please continue."] }
  ],
  //some processing code here, look at this site's source code if you want to see it
};

As you can see from above, the general form of the program is a regular expression and a list of responses. Pretty simple. In order to scale, I just needed to make more of the pattern/response pairs. I got ChatGPT to generate about 500 of them before I reached the rate limit.
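To make that concrete, here is a minimal sketch of the matching loop (my illustration, not part of ChatGPT's output), working against the eliza object above:

// Minimal sketch of the matching loop (illustrative, not the exact code this site runs).
// Scan the pattern list in order, take the first regex that matches,
// pick a random response, and substitute the captured group for $1.
// The catch-all /(.*)/ at the end of the list guarantees a match.
function respond(input) {
  for (const { pattern, response } of eliza.patterns) {
    const match = input.match(pattern);
    if (match) {
      const reply = response[Math.floor(Math.random() * response.length)];
      return reply.replace("$1", match[1] ?? "");
    }
  }
  return "Please go on."; // only reached if the catch-all is removed
}

respond("i feel lonely today"); // e.g. "Why do you feel lonely today?"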

Second Stage: GPT-2

To get to 1,000,000 responses, I fine-tuned GPT-2 on the 500 patterns and responses made by ChatGPT.

(Note: I did this on February 28. On March 1st, literally the next day, OpenAI announced the ChatGPT API. I will definitely have to revisit this project, using ChatGPT the whole way.)

About 50% of the generated patterns and responses had syntax errors and were unusable. I ended up using tree-sitter to parse the JavaScript and extract the valid patterns and responses. It worked really well because tree-sitter is pretty tolerant of errors, so an error in one pattern/response pair wouldn't cause the whole file to fail to parse. I had to generate about 2,000,000 patterns and responses before I had enough to get to 1,000,000 usable ones.
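For the curious, here is a rough sketch of how that extraction can be done with the Node.js tree-sitter bindings. The real script differs, but the idea is the same: walk the syntax tree, skip anything that failed to parse, and keep the pattern/response objects.

// Rough sketch of the extraction step using the Node.js tree-sitter bindings.
const Parser = require("tree-sitter");
const JavaScript = require("tree-sitter-javascript");

const parser = new Parser();
parser.setLanguage(JavaScript);

function extractPairs(source) {
  const tree = parser.parse(source);
  const pairs = [];
  const walk = (node) => {
    if (node.type === "ERROR") return; // a broken pair doesn't spoil the rest of the file
    if (node.type === "object" && node.text.includes("pattern:")) {
      pairs.push(node.text); // keep the { pattern: ..., response: [...] } literal
      return;
    }
    for (const child of node.namedChildren) walk(child);
  };
  walk(tree.rootNode);
  return pairs;
}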

In the end, I generated about 1,000,000 usable responses and about 400,000 patterns. Most patterns have only 1 response, but some have more. For example, /dairy/ had 600 responses for no reason (it wasn't even in the fine-tuning data).

Then came the boring part: cleaning up the generated responses. It's a known fact that GPT-2 can be very, very naughty.
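One way to automate part of that cleanup is a simple word blocklist, sketched below. The list and the generatedPairs variable are placeholders, not the real thing:

// Drop any pair whose responses contain a blocklisted word.
// Both the list and generatedPairs are placeholders for illustration.
const blocklist = ["someBadWord", "anotherBadWord"];

const isClean = (responses) =>
  responses.every(
    (r) => !blocklist.some((word) => r.toLowerCase().includes(word.toLowerCase()))
  );

const cleaned = generatedPairs.filter((pair) => isClean(pair.response));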

After that, I had to write the code to speed up the program; 350,000 regexes were a bit too slow. Fortunately, 99% of the patterns were of this form:

const pattern = /some words to match (.*)/i

A matching substring, followed by a catch-all. So, instead of doing a regex match for every pattern, I just constructed a tree of the words to locate matching strings. Here is an example:

//Instead of this
const regex1 = /I feel sad (.*)/i;
const regex2 = /I feel (.*)/i;
const regex3 = /I feel angry(.*)/i;

//I do this
const tree = {
  "I": {
    tree: {
      "feel": {
        tree: {
          "sad": {
            responses: [regex1],
          },
          "angry": {
            responses: [regex3],
          },
        },
        responses: [regex2],
      },
    },
  },
};
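Looking up a match is then a walk down the tree, word by word, keeping the deepest node that has responses. Here is a simplified sketch (the real lookup handles more cases, like punctuation and casing):

// Simplified lookup sketch: walk the tree word by word and keep the
// deepest node that has responses, so the most specific pattern wins.
function lookup(tree, input) {
  const words = input.split(/\s+/);
  let node = { tree };
  let best = null;
  for (const word of words) {
    node = node.tree && node.tree[word];
    if (!node) break;
    if (node.responses) best = node.responses;
  }
  return best;
}

lookup(tree, "I feel sad today"); // -> [regex1], the most specific match
lookup(tree, "I feel strange");   // -> [regex2], the "I feel (.*)" fallback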

Combined with a few more tricks for size, I compressed the JSON down to about 5MB gzipped, and it's much faster (win-win).

That's about it for the code.

AI usage

Other than the above, I didn't use any other A.I.s. I probably should have used a generative model to make Spudmund Freud, but I had an image in my mind of what I wanted it to look like. So, I just made it in Blender myself, and I got the textures (including the library and the chair) from textures.com.

Conclusion

Did it work? Somewhat. Scaling up the number of responses did drastically improve the pattern match rate: whatever you have to say, there is a good chance it has a response for it. The problem is that the responses are not very good. I think GPT-2 is the weakest link here; it's just not very good at generating coherent responses. For the next iteration, I will definitely use ChatGPT to generate the responses.

Also, GPT-2 would often generate a response that ends in a question like "Would you like to know more?", but the implementation has absolutely no memory, so it can never follow up. There were no responses like that in the data I fine-tuned it on, so this habit is GPT-2's own.

Finally, a lot of the responses follow a set pattern like "$BLANK is a popular gift", which is definitely a symptom of over-training. The problem was that when I under-trained it, it didn't produce topical responses, so it's a balancing act.

Bonus

Following the psychology theme, I gave the A.I. a complex; see if you can find it.