
Comfort with Context Windows

How much is *too much* for an AI to know about you?

I've filled up ChatGPT's available memory of me. What happens now?

Over the past six months, my prolific use of ChatGPT (among other tools) has made AI feel like my most consistently available and dependable colleague. Since I work across so many projects, ChatGPT is the only entity that carries context with me across all of my different windows of work.

As a result of this over-reliance on AI, I've had to sacrifice personal boundaries for the sake of productivity and moving quickly. Which is why it now knows everything.

ChatGPT knows about my obsession with AI productivity and optimizations. It knows about my work in crypto. It knows about my passion projects around community and education. It reads every blog post because it helps me edit nearly every line. It helps me translate and parse messages from my doctors and my colleagues and write emails to negotiate new project proposals or lease agreements with my landlord. It even talks me off the ledge about my perpetual state of career crisis.

Here's a glimpse at some of the details that ChatGPT has committed to its memory about me.

A few examples of what ChatGPT has learned about me after so many months of such prolific use. Its memory is now full, so the only way for it to form new memories is for me to delete old ones.

But ChatGPT is not a human. Unlike with a person, I have complete control over which memories it keeps and which it deletes—a feature of OpenAI’s product that I find both fascinating and unsettling. On one hand, the ability to prompt the AI to "forget" personal details about my husband or children is reassuring. On the other hand, asking it to "forget" critical nuances, such as feedback or challenging aspects of my personality, feels uncomfortably like shaping a version of reality that flatters my own biases—a bit like creating a cloud of self-delusion for it to carry me on.

Yesterday, I shared a video demonstration showing how I used AI to create a resume. In the video, I downloaded my LinkedIn profile as a PDF, uploaded it to ChatGPT, and asked the AI to generate a resume tailored to a job description I found on the spot. Amazingly, ChatGPT not only used the details from my LinkedIn PDF, but it also added contextual information, creating bullet points about past roles and achievements—even including details that weren’t explicitly listed on my LinkedIn profile. This is because it carried context with me from many other threaded conversations over the course of the past six months.

ChatGPT is also learning how I like to learn. Earlier this week, during a ChatGPT 101 course I gave to my mom and a few of her friends, we all tried the same experiment: we asked the interface the exact same question, “Can you help me explain an LLM to my mom?”

Unsurprisingly, the responses we each received were different (such is the generative nature of AI). But what surprised them was how much more nuanced my response was. It leaned heavily on an abstract metaphor, comparing LLMs to an internet librarian who selectively stores materials for future reference.

I know this is because my personal instance of ChatGPT has learned that I grasp concepts best through metaphors, examples, and comparisons. If I were to start fresh with a different AI-based learning platform, I’d have to either carry that preference over somehow or start from scratch, hoping to rebuild this personalized learning “brain” all over again. This is my lock-in effect for AI right now. The AI that knows the most about me carries the most power.

I do worry about the risks of sharing so much deeply personal context and information with an AI application that stores all of this data on cloud servers. I also wonder how much context I'm comfortable with AI knowing about my own children as they grow up. And I wonder if there will be a better way for me to get all of the benefits of this hyper-personalized access and recommendation engine without the ever-present feeling that I need to look over my shoulder, waiting for a big tech company to use all of this against me one day.

For all of my other bot followers or fellow AI addicts out there, I wonder... what do you think?

When it comes to personal boundaries with AI, it raises the question: how much is too much? (image source: DALL-E)

#ai #technology #productivity #data #boundaries