AI as Intern: how GPT-4o helped me analyze 300+ post-it notes

I recently listened to a Vergecast episode featuring Laura Mae Martin, who shared an interesting perspective on the evolution of generative AI. She likened its current state to that of an intern—capable, but not quite refined. Over time, she speculated, it could evolve into an assistant that anticipates needs, and perhaps even a partner that co-creates and suggests ideas. That “intern” analogy has been on my mind as I embarked on a new project: analyzing post-it notes from a library-wide summer retreat.

During the retreat, we conducted various exercises and ended up with hundreds of ideas, suggestions, and questions—all jotted down on post-it notes. We took photos of these notes, and I decided to see if ChatGPT 4o could help transcribe them. Initially, I uploaded full images containing 30+ notes each, but the transcription quality wasn’t great—perhaps the system was overwhelmed, but it felt as if GPT was putting in low effort. So I tried cropping the images to reduce the number of notes per batch. Success! With fewer notes per image, the system processed the data more effectively, though I still noticed some missing or incorrect words, likely due to the variability in handwriting. Despite this, the transcription was about 70% accurate, which wasn’t bad but still required manual verification and edits.
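
I did all of this in the ChatGPT web interface, but the same crop-and-batch idea could be scripted. Below is a minimal sketch using the OpenAI Python SDK; the folder name and prompt wording are placeholders, not what I actually used.

```python
import base64
from pathlib import Path

from openai import OpenAI  # assumes the OpenAI Python SDK with an API key in the environment

client = OpenAI()

def transcribe_batch(image_path: Path) -> str:
    """Send one cropped photo of a few post-it notes to GPT-4o and return the transcription."""
    b64 = base64.b64encode(image_path.read_bytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe each handwritten post-it note in this photo. "
                         "Return one note per line; write [illegible] for anything you cannot read."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# "cropped_photos" is a hypothetical folder of images, each cropped down to a handful of notes
for photo in sorted(Path("cropped_photos").glob("*.jpg")):
    print(f"--- {photo.name} ---")
    print(transcribe_batch(photo))
```

Keeping each image down to a handful of notes is the scripted equivalent of the cropping trick that made the transcription workable for me.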

We had several pages of notes like these.

As tedious as it was, reviewing each note allowed me to become deeply familiar with what my colleagues had shared. I learned a lot about their concerns, aspirations, and what was on their minds. It was like a massive brain dump—one that deserved careful attention.

Another aspect of the retreat was a voting exercise, where participants used stickers to highlight the ideas that resonated most with them. GPT handled this fairly well, accurately counting the number of stickers per note about 80% of the time.

The AI struggled with stickers that overlapped more than one note.
Only one vote per dot, please! :)

Next, I compiled the results into a spreadsheet with over 360 individual notes. With all the items accounted for, I asked GPT to help me cluster and categorize them. While it did okay, it missed some obvious connections—such as failing to group notes about doors, bathrooms, and the loading dock under the category of "facilities and physical spaces." I tried different approaches—sometimes providing the categories myself, and other times inviting the AI to generate categories itself—but the results were still lacking.
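
For the curious, here is a rough sketch of what the "providing the categories myself" approach could look like as a script rather than a chat. The category names, file names, and column name are hypothetical stand-ins, not my actual spreadsheet.

```python
import pandas as pd
from openai import OpenAI  # same SDK and API-key assumption as above

client = OpenAI()

# Hypothetical category list supplied up front, instead of letting the model invent its own
CATEGORIES = [
    "Facilities and physical spaces",
    "Services and programs",
    "Communication and outreach",
    "Staffing and workload",
    "Technology and tools",
]

notes = pd.read_csv("retreat_notes.csv")  # placeholder file with a "note" column

def categorize(note_text: str) -> str:
    """Ask GPT-4o to pick exactly one category from the fixed list for a single note."""
    prompt = (
        "Assign the following retreat note to exactly one of these categories:\n"
        + "\n".join(f"- {c}" for c in CATEGORIES)
        + f"\n\nNote: {note_text}\n\nReply with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

notes["category"] = notes["note"].apply(categorize)
notes.to_csv("retreat_notes_categorized.csv", index=False)
```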

In the end, I decided to move the notes into Miro and sort them manually. Although this was time-consuming, it deepened my understanding of the material. Handling each note forced me to think about it in a more meaningful way, and the process sparked numerous questions for future conversations.

First I sorted the notes into broad categories, and then refined them further into subcategories, creating 30 clusters.

I exported these from Miro into PowerPoint for easier sharing. The goal was to transform the large pile of notes into more manageable groupings, making the information more digestible and encouraging discussion.

Conceptually, it was interesting to place certain notes side by side to see what connections might emerge. It felt like I was inviting the post-its to "converse" with each other, with me acting as the facilitator.

Although GPT had some difficulties with transcription and categorization, it still provided valuable insights in other areas. For instance, I used it to perform a sentiment analysis on the large spreadsheet of notes. After a few prompt adjustments, GPT was able to generate its own set of sentiment categories and assign each of the notes accordingly. While it wasn’t perfect, the analysis gave me a useful overview of the general tone and feelings behind the feedback, helping to identify both positive and critical themes.
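
A scripted version of that sentiment pass might look something like the sketch below: first let the model propose its own categories from a sample of notes, then have it label each note. Same caveats as before; the file and column names are placeholders.

```python
import pandas as pd
from openai import OpenAI  # same SDK and API-key assumption as above

client = OpenAI()
notes = pd.read_csv("retreat_notes.csv")  # placeholder file with a "note" column

# Step 1: have the model propose its own sentiment categories from a sample of the notes
sample = "\n".join(notes["note"].sample(40, random_state=0))
proposal = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Here is a sample of feedback notes from a staff retreat:\n"
                   f"{sample}\n\n"
                   "Propose four to six short sentiment categories that cover this feedback. "
                   "Reply with one category name per line.",
    }],
)
categories = [line.strip("- ").strip()
              for line in proposal.choices[0].message.content.splitlines() if line.strip()]

# Step 2: assign every note to one of the proposed sentiment categories
def label_sentiment(note_text: str) -> str:
    prompt = (
        "Assign this note to exactly one of these sentiment categories:\n"
        + "\n".join(f"- {c}" for c in categories)
        + f"\n\nNote: {note_text}\n\nReply with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

notes["sentiment"] = notes["note"].apply(label_sentiment)
notes.to_csv("retreat_notes_with_sentiment.csv", index=False)
```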

Next, I turned to the AI to summarize the smaller selections of notes, generate conversation sparks, and offer thought starters. This time, it worked well. In just 15 minutes, I had summarized all 30 clusters, complete with questions to explore further.
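
Here is roughly what that per-cluster summarization step could look like as a script, again with placeholder file and column names rather than my real data.

```python
import pandas as pd
from openai import OpenAI  # same SDK and API-key assumption as above

client = OpenAI()
notes = pd.read_csv("retreat_notes_clustered.csv")  # placeholder file with "cluster" and "note" columns

for cluster_name, group in notes.groupby("cluster"):
    joined = "\n".join(f"- {n}" for n in group["note"])
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"These retreat notes were grouped under the cluster '{cluster_name}':\n"
                       f"{joined}\n\n"
                       "Write a two-sentence summary of the cluster, then suggest three "
                       "open-ended questions we could use to spark discussion about it.",
        }],
    )
    print(f"## {cluster_name}\n{response.choices[0].message.content}\n")
```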

One final task I gave GPT was to help identify 50 "quick wins" from the 300+ notes—actions we could take to address immediate needs or opportunities. I also asked for reasoning associated with each idea.  

After refining the list, I asked the AI to sort these “wins” by ease of implementation, though some results needed to be taken with a grain of salt. For instance, it labeled creating student-friendly orientation videos as "easy," which is true in terms of technical production, but overlooks the complexity of content development and distribution.

The point, however, wasn’t to treat GPT as a consultant, but rather as an intern—helping me sift through ideas and spark new thoughts. After sorting the “wins” by ease, I next asked it to rank them by strategic value. Again, it provided categories—highly strategic, moderately strategic, and minimally strategic—with justifications for each. This led me to develop my own takeaway sheet informed by all this analysis.
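
Scripted, the ranking step is essentially a single prompt over the refined list. A hypothetical sketch, with a placeholder file name:

```python
from pathlib import Path

from openai import OpenAI  # same SDK and API-key assumption as above

client = OpenAI()

# "quick_wins.txt" is a placeholder: one refined quick win per line
wins = Path("quick_wins.txt").read_text()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Here is a list of candidate quick wins for our library:\n"
                   f"{wins}\n\n"
                   "Group them as highly, moderately, or minimally strategic, and within each group "
                   "order them from easiest to hardest to implement. Give a one-sentence "
                   "justification for each placement.",
    }],
)
print(response.choices[0].message.content)
```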

Conclusion

While I could have done most of this without GPT, the tool provided significant value by speeding up processes, generating ideas, and allowing me to approach the data from different perspectives. The combination of generative AI, Excel, PowerPoint, and Miro transformed what could have been a mundane task into a more collaborative experience. GPT’s limitations—such as struggles with handwriting recognition and some clustering inaccuracies—were balanced by its strengths in helping to synthesize data, generate questions, and provide useful categorizations.

The value wasn’t just in saving time; it was in how the process deepened my engagement with the ideas and thoughts my colleagues had shared. As I moved through each stage—transcription, sorting, clustering, analysis, presentation—I developed a broader understanding of the collective mindset across our library, along with a greater sense of empathy for the diverse perspectives and concerns expressed. The result was more than just a categorized list of notes; it felt like an evolving conversation, fostering a deeper connection with my colleagues, and with the AI.

Postscript

During my various iterations of the categorization phase, ChatGPT 4o generated different files for me. I kept going back and making tweaks based on the output. At one point, it started naming the files it was creating, and amusingly, it titled one "final" even though I wasn’t done and still wanted to iterate more. I took it as a hint that maybe my intern needed a break.
