Custom GPTs | Jeff Bussgang | Leading with AI session
Classroom AI Tools Discussion with Jeff Bussgang
Jeff Bussgang: One of the things I like to do in my HBS classroom is try to intimidate the students by starting a minute early. When they walk in and say, "Oh my God, am I late?" and get all nervous, I shut the door one minute before class starts. It only has to happen once or twice, and then it never happens for the rest of the semester.
Welcome everybody. I'm Jeff Bussgang, a senior lecturer here at HBS. I teach in the second year, and I'm excited to have you in this session to talk about custom GPTs and taking these tools into the classroom. I'll talk about how we've applied them in the classroom in the last year, but one thing I'm struck by is how fast this is moving. My thinking about applying these tools changes quite rapidly: I modified what I did in the fall semester for the spring semester, and I'm already rethinking it for next fall.
I'll take you on a journey of how my thinking has evolved in taking this information into the classroom. I'll say a little about myself for context, then we'll dive right in. I've left a lot of time for Q&A—I've got probably 20-30 minutes of content, leaving plenty of time for dialogue. Please feel free to ask questions at any point.
Briefly about me: I was class of '91 at the College, in computer science. Believe it or not, I studied AI in the late 1980s—back then it was called neural networks, expert systems, and natural language processing. I then did a couple of years at BCG to learn PowerPoint, then came back to HBS and graduated in '95.
In 1995, I joined an internet startup. I was very excited about the first wave of the internet, and I've reflected on how lucky I am that I was able to start my career in that Internet 1.0 era, and now at this point, see this amazing AI revolution. I did two internet startups—the first went public in the mid-1990s, quite successful; the second was sold successfully. Both were venture-backed.
About 22 years ago, I co-founded a venture capital firm, Flybridge Capital, with a couple of friends who had previously been my investors, and that's been my main vocation for the last couple of decades. Because of my technical background, I've been investing in AI companies for 10-15 years. A couple of years ago, the firm became exclusively focused on AI investing, so my day job is to invest in AI software companies.
About 13 years ago, I joined the HBS faculty part-time. I teach two courses in the elective curriculum. The first is "Launching Tech Ventures" (LTV), focused on pre-product market fit startups—I've taught it to about 2,000 students over 14 years, with three sections totaling about 250-300 students each year. The second is "Venture Capital Journey"—a narrower community of 30 students who aspire to become venture capitalists after graduation.
I've written a couple of books, and I'm in the middle of writing a third about the impact of AI on product market fit, called "The Experimentation Machine: Finding Product Market Fit in the Age of AI." Ethan talked about the impact that AI is having on the toolset to be more efficient as a founder. I've been studying this both in my portfolio, as I watch software companies leveraging AI, and in the classroom, coaching students on using AI tools effectively.
It's an incredible challenge writing about something moving so quickly. Someone invariably asks how to write a book about a toolset that will be out of date in a year or two. The best I can do is capture timeless frameworks and describe timely tools, recognizing that the tools will change.
Jeff: [To Scott Cook in audience] I'm thrilled that Scott Cook is here. I'll embarrass you a little if I may. Founder of Intuit, you created this incredible program, "Follow Me Home," which I talk about in my classroom. You invented this program of getting an empathetic view of the customer by following them home as they opened your software. Maybe say a minute about that program, and I'll give you the AI version 40 years later.
Scott Cook: Early on, we all did tech support—all of us, and all the engineers had to spend at least three hours a month handling tech support calls. Even though we'd done in-house testing and usability testing, and had seen lots of customers start up and use our product (which at that point was Quicken), there was something different in the calls. It sounded like there were things they didn't understand that we had seen them understand in our usability lab.
I decided we probably needed to watch them in reality rather than in our lab, to see them in their real environment with their finances. We got a local software store to pass out flyers whenever they sold our software, saying we'd like to come to your place and watch you take the box and start up the experience.
Oh my God, we saw tons of stuff we never saw in the more artificial environment of bringing customers into our building. The reality is just different when you're doing things with your finances and money. We called it "Follow Me Home" testing, and we still do it now to learn about new product areas or features—go watch real people do the real thing and just watch, not interview, not survey, watch.
We didn't know at the time, but there's a long tradition of that from the Toyota Production System. They call it "Gemba"—they have the leader go, don't depend on reports, don't read reports, don't listen up—just go to that part of the line, stand there and watch. We now do that internally as well, inspired by Toyota on our internal production processes.
Jeff: This concept of customer intimacy and watching customers use your product without assumptions—can AI synthetically enable some of these capabilities? Could you train AI to act like a user persona and describe your prototypical customer, your ideal customer profile, and infuse it with life, just like "Marketing Mary" was infused with life by HubSpot when they described that persona and put it all over their offices?
Could you engage with that user persona synthetically to get maybe 30% or 50% of the value? As the AI gets better, maybe it delivers 70% or 80% of the value. You can't have every employee follow every customer home, but you can have them all engage with an AI that mimics your ideal customer profile.
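One way to sketch this synthetic "Follow Me Home" idea is a system prompt that instructs a chat model to role-play the ideal customer profile. This is a minimal, illustrative sketch: the persona fields and the example persona below are my own assumptions, not HubSpot's actual "Marketing Mary" definition.

```python
# Sketch: turn an ideal-customer-profile (ICP) description into a system
# prompt so a chat model can role-play that persona in discovery interviews.
# The field names and example persona here are illustrative assumptions.

def build_persona_prompt(name, role, goals, frustrations, context):
    """Assemble a system prompt that asks the model to stay in character."""
    return (
        f"You are {name}, a {role}. Stay in character at all times.\n"
        f"Your goals: {'; '.join(goals)}.\n"
        f"Your frustrations: {'; '.join(frustrations)}.\n"
        f"Context: {context}\n"
        "Answer interview questions the way this person actually would: "
        "be specific, mention real-feeling constraints, and push back when "
        "a proposed product would not obviously help you."
    )

marketing_mary = build_persona_prompt(
    name="Marketing Mary",
    role="marketing manager at a 50-person B2B software company",
    goals=["generate qualified leads", "prove campaign ROI to the CEO"],
    frustrations=["too many disconnected tools", "no time for reporting"],
    context="Budget-constrained; evaluates new software skeptically.",
)

print(marketing_mary)
```

In practice, this string would become the system message of a chat-completions call, and every employee could then "interview" the persona on demand.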
It's that kind of thinking I'm trying to push our founders to consider—using AI to leverage their journey of finding product market fit at every step, from customer discovery to ideation to designing experiments to finding pain points and value propositions to testing those propositions to developing prototypes and personas.
To bring this to life in the classroom, I wanted to bring our pedagogy to life too. When OpenAI came out with its API about a year ago today, I got excited—only a nerd would get excited about an API release—and thought about what I could do with this platform in the classroom.
I decided to create a personalized faculty co-pilot that I would bring into the classroom for my fall class. I called it "Chat LTV"—LTV is the course name, Chat LTV is the chatbot faculty co-pilot trained on course content. Then, when OpenAI released custom GPT functionality in November (toward the end of the fall semester), I started creating custom GPTs to provide personalized feedback for students based on their submissions.
When I showed this to the faculty in December, Kareem and Tre were excited—they said it might be the first time in Harvard Business School history that students got feedback on their submissions! Think about it—we never give feedback. At the end of the semester, we give a one, two, or three. You never get anything else in this classroom environment you pay a lot of money for, so it's nice to get feedback, even from an AI bot.
Chat LTV is a Slackbot embedded in the course Slack, available to students through two channels: publicly (where they could post in a channel everyone would see) or privately (as a one-on-one tutor). I used Slack because students already use it for assignments and posting reflections after class—they live in Slack.
When creating new products and interfaces, you have two choices: create a new interface and ask people to come to it, or embed yourself in an existing interface where users already are. I decided on the latter because I wanted to be where students already were. Being embedded in Slack was a fortuitous strategic choice.
I built the content corpus and trained it only on my content—my books, my 50 or so HBS case studies, selected blogs, teaching notes, etc. It was not allowed to know anything outside that corpus. If you asked who Donald Trump is or who won the World Series last year, it would have no idea. It only knew the content I trained it on, ring-fenced around my course.
This was important because students can go to ChatGPT for generic questions, but I wanted them to come here for answers related to the course. As those who've been through the HBS curriculum know, on the day of a case, that case is your total focus for those 80 minutes. I wanted students to be able to inquire about the cases and questions we'd discuss, to force them to prepare using the chatbot for classroom conversation.
With about 250 students, it was quite successful—we had over 3,000 queries during the semester. We launched after Labor Day and ran it through the fall semester.
Here's the quick architecture: a frontend content management system to ingest cases, books, and other content, with the flexibility to add new pieces as they developed. The backend handled the Slack API integration, the indexing, and the content management.
One key architectural element was RAG (Retrieval-Augmented Generation), which is now standard but was newer last summer. It's how you ground the GPT in specific chunks of your own content without retraining the model. The system would pull the content chunks relevant to a query from the vector database, then pass those chunks to the OpenAI API to generate an answer. This RAG architecture helps avoid hallucinations and improves query relevance. I didn't do any model fine-tuning—just used the standard API with my case library.
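The retrieval step described above can be sketched in a few lines. This is a self-contained toy, not Chat LTV's actual implementation: a real system would use a proper embedding model and a vector database, whereas the bag-of-words vectors and example chunks here are stand-ins so the example runs on its own.

```python
# Minimal sketch of RAG retrieval: embed the course chunks and the query,
# pull the most similar chunks, and assemble the prompt that would be sent
# to the LLM. Bag-of-words vectors stand in for real embeddings.
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: lowercase word-count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Illustrative course chunks (invented for this sketch).
chunks = [
    "Sprout case: key risks include rising CAC and a narrow ICP.",
    "Unit economics: LTV to CAC ratio above 3 suggests efficient growth.",
    "Office hours: email the instructor to schedule a slot.",
]

question = "What are the risks in the Sprout case?"
context = retrieve(question, chunks)

# Ring-fence the answer to the retrieved course content.
prompt = (
    "Answer ONLY from the course excerpts below.\n\n"
    + "\n".join(f"- {c}" for c in context)
    + f"\n\nQuestion: {question}"
)
print(prompt)
```

The assembled `prompt` is what would be passed to the chat-completion API; the "answer only from the excerpts" instruction is what keeps the bot ring-fenced to the course corpus.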
I thought I was giving students a cool tool, but I gave myself a gift: the ability to see into students' minds before walking into the classroom. I called it "student observability." Every morning before class, I would log into the admin portal and see what students were asking about, by student and by time.
For example, one student was a young woman from Southeast Asia, where English was her second language. She had come from a unicorn tech startup there. She was very quiet in the classroom, but one morning I saw she was asking incredibly thoughtful questions about a case called Sprout—key takeaways, how they could grow their top line, their risks, strengths, and weaknesses. I knew walking in that she was prepared, so I called on her, and she nailed it.
Another example: a student from a Fortune 500 company in IT. She knew technology but was unfamiliar with startup terminology, which showed in her queries: "What's OKR? What's CAC? What's inside sales?" She was struggling with terminology and acronyms. So I asked Chat LTV to give me the top 15 acronyms my students should learn about, and it provided a good summary of acronyms I use in the classroom. I also asked it to tell some jokes about the acronyms to make them memorable—that was about "dad joke" level, but still fun.
Another pattern I saw frequently: as many times as you think you're sharing important information about course requirements, grading, and assignments, students don't always hear you the first time. I'd get inquiries like "What are the guidelines for the project?" (which I covered in the second class), "What's the assignment we need to post in Slack?" (already posted), or "How can I schedule office hours with Jeff?" (just email me). So I created an admin content repository and trained the chatbot on it to answer these administrative questions.
Audience Question: A common fear my faculty colleagues had was that students wouldn't read the case—they'd just ask the chatbot to summarize it. Aren't you putting yourself out of a job? They might learn from the AI rather than interact with one another, which is so much of the learning in case discussions.
Jeff: I can't tell you scientifically what happened, but anecdotally, I felt students came in very well prepared. They did read the cases and asked the chatbot about the assignment questions I posted. There might have been some students who skipped reading and just got the summary to enter the conversation, but very few in my class.
The beauty of the case method is that you can't just give a case back in a case conversation. If someone just repeats the case, I'll be all over them: "Okay, great, but what would you do? Why do you believe that? What's your next step? What's your action plan?" They can't just fumble around with laptops or printouts—they have to have a conversation with fellow students.
Jeff Bussgang: My conclusion was that the chatbot is a way of advancing student preparation, but it forces faculty to be better at pushing forward the real case conversation and debate to get to the salient issues and surface the tough questions. I found that to be quite effective. Students had prepared, pre-processed, thought about the case, and used the chatbot. The chatbot isn't always right because it's non-deterministic. With startups, there's no right answer - it's incredibly subjective and very debatable. Every decision a founder makes is debatable. That's how I found the chatbot to be quite helpful. It's also a superpower for introverts, and it helps me see the introverts.

Another funny thing - I think I'm always available to my students, but I'd go to bed at 10 or 11 o'clock, go offline, and assume nobody was using the chatbot. Then I'd get back online at 6 or 7 in the morning, and there'd be like 30 queries between 10-11 at night and 2-3 in the morning. It's another lesson that faculty need to be ready to be dynamic and responsive to students, and having the leverage of AI can be quite helpful.
Moving to Online Courses
We took my course "LTV: Launching Tech Ventures" online last year as part of HBS Online, and we've decided to create a tutor bot for the online course, which is now happening across HBS Online courses. Launching next week is a tutor bot for the next cohort, exposing it to thousands of students who can ask questions with context for where they are in the course. Architecturally, it goes beyond the corpus of course content: with some guardrails, it will also access the generic chatbot. Since I won't be present, I wanted even more faculty-like coverage, which is what the generic chatbot can provide. If somebody asks about evaluating a startup idea, or how investors evaluate founders in terms of founder-market fit, and I don't have that answer in my content, ChatGPT likely will. Blending the best of the corpus and the public chatbot is something we're going to experiment with.
Q&A with Audience
Audience Member: Since you open the platform before the case discussion, people are typing their questions or concerns beforehand. That gives you time to prepare, unlike in the classroom where they're putting you on the spot. Is that ambiguity, which is required for dealing with situations as they come, being taken away?
Jeff: It's a great question. It's part of the faculty's obligation to create some of that ambiguity and uncertainty in the questions. If you only follow exactly the assignment questions, you're not doing a good job as a faculty member. You want to dynamically go where the conversation is going, not just say, "It’s been 15 minutes, let me ask my second question, which I sent the night before." That's not how you teach. You've got to have a dynamic, organic evaluative conversation. It puts more burden on faculty to be dynamic, to listen carefully to students, and to go where the conversation leads.
Audience Member: You mentioned recognizing that the chatbot pushed the bottom 10-15% of students closer to the mean. Were you able to get any feedback on what it did for the top 5-10%? Did it push them into deeper insights?
Jeff: I felt like everybody stepped up. I thought we generally had richer case conversations, including the top students.
Audience Member: How did you fine-tune this chatbot when it was going out with third-party sources?
Jeff: We did a lot of content evaluation and testing with my corpus, but without model fine-tuning, meaning we didn't change the weights of the model. We would do testing to improve the prompts or the structure of the retrieval algorithms, so to be precise, it was prompt engineering rather than fine-tuning.

Audience Member: What's that nuance? I talk with developers and I'm trying to learn their language.

Jeff: OpenAI has a pre-trained model whose weights encode the links it has learned between concepts and language. I don't change that model. If you're American Express building a fraud model, you might change the model itself because your application needs certain weightings. For this exercise, I might play with the prompts or change the content, but I don't change the model's weights.

Audience Member: You showed that people were using this system between 10 PM and 2 AM. Did you notice any activity during the case discussion itself?
Jeff: No, and in part because I have a no-device policy. If I catch you on your device, I'll either give you a look that will make it awkward, or I'll send you a note after class saying, "You know my policy, I saw you on your device, don't do it again."
Audience Member: Would you consider relaxing that policy?
Jeff: Earlier, when Twitter was fun and novel, I did a live tweet experiment in one class. I said, "Everybody, have your laptops for this class; we're going to live tweet during the discussion." I wanted everyone to engage in the conversation while also tweeting, to experiment with multitasking. It was fun. I might try it once with this chatbot as well, but I wouldn't do it for every class.

Implementation Details
Audience Member: How difficult is it to create something like this? Is it scalable for people leaving this class to create their own?
Jeff: It's incredible how fast the tools are getting and how easy they are becoming. This took one engineer three months - pretty minor in terms of the heavy lifting - and that was a year ago. Today, you could do it on your own. There are many no-code tools out there, like Custom GPT, that you can use. You can throw in documents, create a custom chatbot, and it'll take you a couple of hours. The custom GPTs I created for personalized evaluators take me just 5-10 minutes.
Audience Member: I lead the customer service team at GoDaddy. One thing we're struggling with is deciding economically which open-source models to use. Is 80% accuracy worth it versus 60%? How do you decide what open sources to use in your super model?
Jeff: It's a great question. I only have 300 students, so I have a much smaller universe to work with. In my investment activities, we invested in a company called RC that's doing specialized or small language models (SLMs) addressing this exact issue. Most corporations don't want to pay for massive models that are expensive to run and manage; they want specialized language models. The RAG (Retrieval-Augmented Generation) cost differential is phenomenal. You're not relying on the LLM to do most of the work; you're relying on a vector database that retrieves relevant context. That's a problem that's been around for a while and is much cheaper to solve. Fine-tuning a model gets very expensive, while RAG can be updated almost instantaneously, so for enterprise use cases, we've found RAG to be very cost-effective.

Audience Member: How are you testing the accuracy? We're using it in a private equity context where accuracy is really important.
Jeff: These models are non-deterministic and not good at math, and my class is fairly analytical. I'll ask students about unit economics, LTV to CAC ratios, what IRR they'd put on something, similar questions to what you'd ask in a private equity context, but with a more early-stage flavor. ChatGPT did a pretty poor job answering those questions. Students had to do the work themselves to get the ground truth. These models are getting better - GPT-5 will be better at math, GPT-6 will be even better. It's a moving target. We had interactions where a student would answer based on what the chatbot said, and I'd ask, "How did you come up with that?" pushing them on the answers and ground truth.
Audience Member: With certain applications, you can get to the underlying source material quickly. We're seeing the exact location in the paragraph of an SEC filing and being able to click on that to self-serve.

Audience Member: In India, there's a lot of Tamil literature, 2,000 years old, that has been digitized into HTML and PDF form. I built a platform called Orology for converting PDFs to mobile versions. How easy is it to ingest that content without a lot of coding? I want to ask questions like "What did somebody in the 12th century say about romance?"
Jeff: The systems are incredibly good at doing that right now. Custom GPT and other tools make this possible.
Personalized Tutors
In November, OpenAI came out with custom GPTs. There are also companies doing custom GPTs - there's literally a company called "Custom GPT" and others doing similar things. In the standard ChatGPT Plus subscription ($20/month), there's a custom GPT capability you can train on anything you want.

When my students submitted their end-of-semester projects, I decided to train a custom GPT to evaluate and grade them. Imagine 250 submissions - that's a lot for any faculty member to review. Most faculty don't write essay feedback to students, but I created this custom academic assistant to review papers. I trained it on previous years' strongest papers, gave it the project description and grading rubric, and asked it to be critical of both the final paper and the startup idea.
I've noticed GPTs are too nice, so I always add to my prompts: "Be critical, this is a rigorous course with high standards." I ask for three things the AI liked and three things it didn't like about each submission. In the spring semester for Venture Capital Journey, I did this for every assignment. Students submit assignments every week or two, and I created a custom GPT per assignment. For the investment thesis assignment, I trained it on the best investment thesis blog posts from venture capital firms and created a GPT to give feedback - three things it liked, three things that could be improved, and an overall grade. I sent these evaluations to each student. For the final project, 250 emails went out with this feedback. For weekly submissions, I'd run it through the custom grader and send out the feedback.
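The grader setup described above amounts to assembling instructions from a rubric, exemplar papers, and a critical-tone directive. Here is a minimal sketch of that assembly; the rubric items and exemplar text are illustrative assumptions, not the actual course materials.

```python
# Sketch: assemble the instructions a custom grading GPT would be given.
# The rubric and exemplar below are invented for illustration.

RUBRIC = [
    "Clearly stated problem and target customer",
    "Testable hypotheses and a concrete experiment plan",
    "Honest discussion of risks and unit economics",
]

def build_grader_prompt(rubric, exemplars):
    """Combine rubric, exemplar excerpts, and a critical-tone instruction."""
    lines = [
        "You are a rigorous teaching assistant. Be critical; this is a "
        "rigorous course with high standards.",
        "Grade the submission against this rubric:",
    ]
    lines += [f"{i}. {item}" for i, item in enumerate(rubric, 1)]
    lines.append("Excerpts from prior strong papers, for calibration:")
    lines += [f"- {e}" for e in exemplars]
    lines.append(
        "Return: three things you liked, three things that could be "
        "improved, and an overall grade. Do not reward length or polished "
        "language over conceptual insight."
    )
    return "\n".join(lines)

grader_prompt = build_grader_prompt(
    RUBRIC,
    ["Prior paper: crisp ICP definition and a falsifiable pricing test."],
)
print(grader_prompt)
```

In the no-code version, this text would simply be pasted into the custom GPT's instructions, with the rubric and prior papers uploaded as knowledge files; the closing instruction tries to counteract the length-and-polish bias mentioned below.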
Audience Member: Did the students know it was a tutor?
Jeff: Yes, I told them what I'd done, gave them the prompt, and shared the URL if they wanted to play with the custom GPT themselves.

Audience Member: Did your grade agree with the AI's grade?

Jeff: About 70% of the time. For grading, you want to be 100% accurate. I found the GPT's grade is too easily swayed by length and polished language, not necessarily good conceptual insights. I graded everything myself, but I wanted students to see what the GPT reacted to.
Audience Member: How was the quality of the feedback and coaching?
Jeff: Very good. Students found the feedback helpful. Part of it is because I have a rubric and trained the system on what good work looks like. If they didn't follow the rubric, the grader was quite good at giving feedback on what was missing.

Audience Member: Is it optimizing for following the framework or for coming up with a great idea?
Jeff: It's evaluating framework adherence, not necessarily judging if an idea is good or bad. I wouldn't use the evaluator to make investment decisions, although I've been experimenting with that in our firm. As a sidebar, last week I wrote an investment memo on a follow-on investment we're making in one of our portfolio companies. I've been playing with an investment memo evaluator and decided to do a red team/blue team analysis. In investment committees, you often have groupthink where everyone gets excited without thinking deeply about alternatives. For the red team version, I asked the GPT to review the memo critically and tell me all the reasons why we should NOT make this investment. That was quite effective because it forced our thinking about the risks and whether we could still talk ourselves into the investment despite them. It's good at playing devil's advocate.
Faculty Adoption
Audience Member: In 1983, I had a company training executives on new PCs using what we called the "electronic case method." My experience was that the faculty didn't uniformly respond well to innovation or new technology. It took time before people were comfortable. What kind of response is the faculty giving to your innovation?
Jeff: It's been incredibly positive. They asked me to present at two different faculty gatherings - one AI-focused, self-selecting group and one for the entire faculty. I had many inquiries afterward from people wanting to use this in their classrooms. As you saw this morning, Mitch Weiss, Kareem, and others have been very active in pushing AI usage among faculty. Not everybody is adopting or embracing it perfectly, but the school is doing a good job. Faculty recognize that students are ahead of them, and if they're not careful, they'll lose the respect and focus of students. It's a good incentive - they want to stay relevant.
Future of Education
Audience Member: If you project this forward, you'll be dealing with AI that's better than your best student. It will make connections better than the best student can. How does education change when the AI companion is better than your best student? What are we going to be teaching and how?

Jeff: My next assignment is to have students create an AI co-founder over the semester. It will be a multi-step process. Last year, their first assignment was to use ChatGPT to come up with a startup idea and a series of experiments to test that idea, then reflect on how good ChatGPT was at doing that. Now I'm thinking about having them create a co-founder and train it to become a domain expert in whatever area they choose. Then they'll pick a user persona and ask the AI to become an expert in that persona. They'll do customer discovery, needs analysis, and requirements analysis with the help of their AI co-founder and AI user persona. Then they'll create mockups and prototypes, and finally reflect on the experience, what they learned, and what the weaknesses were.
Jeff: I think where we're heading is training students to be architects and users of these tools, to be expert utilizers of them, much like they're expert utilizers of laptops and phones. I'm not an accounting professor—I operate in the non-deterministic world of startups. I don't believe AI can answer questions perfectly, because I don't think anyone can answer perfectly what go-to-market choice a founder should make when trying to go from zero to one. That's simply not a knowable question, even with the data we provide in case studies. That's why we debate: to illuminate why certain go-to-market choices work under certain circumstances. AI can't deterministically decide the right amount of money to raise. We can look at projections and talk about tradeoffs and cap tables, but I think we're pushing students to use these tools well.
Audience Member: Are we looking at a potential collapse of the case method, though? If everyone has bots available that can get to the point we try to reach with the first two-thirds of a case discussion—getting all issues on the table and shaping those issues—isn't there a risk? Or maybe there's a way to accelerate deeper conversations?
Jeff: Time will tell. I'm very humble in that I have no idea how this is going to evolve each semester. But my experience so far is that AI has gotten us to the depth of the issues faster and accelerated our students' understanding more effectively.
Audience Member: My startup partners with Jack and Loke on research about this. We use a similar approach, but for career development and recruitment. We evaluate all the AI answers and train the system through peer learning. The value-add is that we might know everything initially at a 90-95% level, but when users submit responses, we give feedback to everyone, and they continue to learn from each other. That's our solution to remain relevant.
Jeff: That's interesting. I'm reading David Epstein's book—has anybody read it yet?
Audience Member: Half of it.
Jeff: I'm about halfway through as well. One thing that struck me is this notion that what's human is how you relate to someone, developing peer relationships, coaching relationships, mentorship, teamwork, and collaboration. As a school, we'll have to lean into that work because AI won't replace that. That's what's special about forming organizations and productive collaboration.
Audience Member: I also attended a consulting firm to learn PowerPoint for some time. I'm curious if the advancements we've seen in learning environments have been similar on the advisory side. The hypothesis-driven approach that McKinsey uses—where partner groups suggest three approaches, evaluate them, and try to refute two to see if the remaining one is best—is a stochastic methodology that's hard to implement in industry. Have you seen trends in advisory where client interaction becomes more stochastic?
Jeff: It's not my field, but in my investment activities, I've been evaluating many "AI for X industry" companies. I've invested in AI in legal, accounting, financial reporting analysis, and a couple of AI consulting companies. My conclusion is that we've hit peak consulting employment. Consulting firms will become way more efficient with AI. Just like Martin Sorrell said today about media planners—we don't need 50,000 of them anymore—it's hard to imagine needing 50,000 25-year-old consultants doing market research, analysis, and generating slides. We may still need strategic thinkers who can use these tools effectively, but I can't imagine we'll have the same churning of data, analysis, and slides that we've had historically.
Audience Member: I have a question about the classroom and young professionals. Research shows that when students use AI, they don't retain information as long. I teach DCF analysis and make students do it by hand because I don't want them using an IRR function without understanding what they're doing. But students ask, "When will I ever do this manually? I can always prepare with tools." I tell them about investment committee meetings where they need to generate ideas on the spot. How do you motivate students to know things beyond prompts?
Jeff: Great question. At Harvard Business School, I use the case method. What would you guess is the most motivating thing in the HBS classroom?
Audience Member: Winning.
Jeff: Winning and interactions—and not looking stupid in front of peers. When I mentioned the two ways to interact with ChatGPT—public or private—what percentage do you think chose private?
Audience Member: 95%.
Jeff: Exactly. I put students on the spot with cold calls throughout case discussions because I want them in situations where they can't have a bot answer for them. "You got the answer—why? Walk me through your calculation. What's your churn rate? What's your assumption for X?" Their motivation is that they don't want to look stupid.
Audience Member: Is there an ethical question in terms of seeing the questions they've asked the bot? That must color your impression of them.
Jeff: I told them I would see what they asked the bot, so they knew that going in. I didn't have a single student complain about it.
Audience Member: To extend her question—if people don't do that unit work, how do they progress to higher levels? If you don't have junior consultants, junior engineers, or media planners because AI generates that work, how do people eventually progress to strategic decision-making mindsets?
Jeff: That's a great question. I don't know if I have an answer for that.
Audience Member: What is the feedback mechanism? I imagine each semester you're adding additional papers or responses, but is there a human feedback process that evaluates responses and says, "I agree with all these points except this one"?
Jeff: I didn't evaluate each response for each student—I spot-checked through sampling.
Audience Member: For custom GPTs generally, is there a mechanism to get human feedback?
Jeff: Automatic evaluators are being created now—you have one model generate a response and another model evaluate it. That's the architecture being implemented for the new tutor bot we're creating. It improves quality somewhat, but still requires human intervention.

Previous Audience Member: For our early systems, we're testing one answer against another and letting users voluntarily provide ratings and feedback. At the end of the day, it's about motivation and needs. Our use case is career development, with thousands of students from ten countries. Those who want something specific work harder. If I'm taking a finance course but don't want to be an investment banker, I might not bother learning deeply. But if I know I have an investment banking interview after graduation, I'll make sure I understand the material.
Audience Member: On the strategic motivation—we know we all learn through feedback. What interests me is that you gave your students more feedback than they'd ever had before. If you used this after each cycle of assignments every week, did you notice a change in the quality of learning over the semester? That could counterbalance the loss of the apprenticeship method we're discussing.
Jeff: I thought my students in the Venture Capital class, where I did weekly feedback, were way ahead of where previous years' students had been by the end of the semester. Now, it may be they were just a better group of students—there's always variation—but I thought they were significantly more sophisticated.
Moderator: We have time for one more question.
Audience Member: When you mentioned the source content, should it be a Word document or HTML?
Jeff: Any one of those. Go to custom.gpt.com and you'll see how to do it.
Audience Member: Is the architecture proprietary that you showed?
Jeff: It's proprietary, so maybe when HBS releases the tutor bot... But I was just sharing it as a learning example.
Jeff: Well, I'm happy to stick around and answer more questions. Thank you all very much.