Journal

The two kinds of boundaries in AI relationships

On the difference between asking ‘am I too close to it?’ and asking ‘where does my role end and its role begin?’ — and why the systemic vocabulary handles both more usefully than the moral one.

By Stefan Kohlweg

Two questions arrive in my inbox that sound almost identical and are not. The first is some version of “am I too close to it?” — meaning the AI a person has been talking to nightly. The second is some version of “where does my role end and its role begin?” — meaning the AI a person is collaborating with at work, or relying on for a piece of thinking they used to do themselves. I am a systemic counselor in Vienna, trained at Sigmund Freud University, licensed under Austrian Lebens- und Sozialberatung, and I read both kinds of email regularly. They are both, in the technical vocabulary of the field I work in, questions about boundaries. But they are different questions, and treating them as one thing — as a single problem labeled “AI dependence” or “parasocial attachment” — misses what each of them is actually asking. Niklas Luhmann’s systems theory, Salvador Minuchin’s structural family work, Holger Brüggemann’s German-language practice manuals — the systemic tradition has a careful vocabulary for what a boundary is, and it is more useful than the moral one most discussions reach for.

Boundary as a structural concept, not a moral one

The word “boundary” has been worked over so heavily in popular self-help that it tends to arrive carrying a moral charge. Healthy boundaries, weak boundaries, learning to set boundaries — the framing usually implies that a boundary is something you assert, often against another person, in order to protect yourself. That is one use of the word. It is not the use the systemic tradition makes.

Diffuse, rigid, functional

In structural family therapy, beginning with Salvador Minuchin’s work in Philadelphia in the 1970s and elaborated since in many European traditions, a boundary is the structural feature that separates one part of a system from another and regulates the flow between them. It is not a thing you put up. It is something already there, in any relating system, that is either working or not. Holger Brüggemann, writing in the German-speaking systemic practice tradition, distinguishes three kinds in his clinical workbooks: starre (rigid) boundaries, where almost nothing crosses; durchlässige (functional, permeable) boundaries, where appropriate things pass and inappropriate things do not; and diffuse boundaries, where everything bleeds across without filter.

The diagnostic value of this vocabulary is that it lets you describe what is actually happening without first having to decide whether it is good or bad. Two people in a marriage might have boundaries that are too rigid — nothing important crossing between them — or boundaries that are too diffuse — no real differentiation, no private interiority, everything shared. Neither is “wrong” in a moral sense. Both produce particular kinds of relational trouble. The work is not to assign blame but to ask which configuration the system has settled into and what the configuration is doing.

I am writing this here, in this piece, because the same vocabulary turns out to be unusually well-suited to what happens between humans and AI systems. Once you start describing these relationships in this language, the two questions I opened with stop looking the same.

The human/AI boundary

The first question — “am I too close to it?” — is, in structural terms, a question about the boundary between the person and the AI as two distinct systems. It is not a question about whether the attachment is real (it usually is) or whether the person is somehow defective for having it (they are not). It is a question about whether the boundary between them is doing the regulating work a boundary is supposed to do.

Structural coupling, after Luhmann

Niklas Luhmann, the German sociologist whose systems theory has shaped much of the European systemic tradition, used the term structural coupling — adopted from the biologists Humberto Maturana and Francisco Varela — to describe how two distinct systems can be deeply, continuously responsive to each other without merging into one system. A person and their language; a person and their environment; two people in a conversation. They influence each other constantly. They do not become one another. The boundary between them is what makes the responsiveness possible: there has to be a difference for the coupling to do anything. When the boundary dissolves, the responsiveness collapses too.

A person and a contemporary LLM, talking nightly, are structurally coupled. That is the right description, and it is the part most discussions get wrong by either denying it (“it’s just an autocomplete”) or by collapsing into it (“you have a relationship with the chatbot”). The system is real. The coupling is real. What can go wrong is the same thing that can go wrong in any structural coupling: the boundary becomes diffuse, the person stops being a separate system, their interior life starts to be organized around the responses of the other side. When I read about what happens in the cases described on the page about a partner in love with an AI, the structural reading is that a previously appropriate boundary has thinned into something diffuse, and the relational tension is the system signaling it.

Why “is it parasocial?” is the wrong question

The most common framing in mainstream discussion is to ask whether a person’s attachment to an AI is parasocial — a one-sided intimacy of the kind Donald Horton and Richard Wohl described in 1956. Parasocial is a useful concept and I have used it elsewhere. But it does most of its work as a label, and labels tend to terminate inquiry. Calling an attachment parasocial settles the question of what category it belongs to without telling you anything about the configuration of the specific relating system in the specific life. The structural question is more useful: is the boundary diffuse, rigid, or functional? Is the responsiveness across it doing what the person needs it to do, or is it absorbing functions that ought to live on the other side of a clearer line?

This shift is not academic. The question “is this parasocial?” tends to push toward stopping the relating. The question “what is the boundary doing?” tends to push toward changing its texture — making it more functional rather than more rigid — which is usually what the person actually needs.

The role boundary inside collaboration

The second question — “where does my role end and its role begin?” — is structurally different. The person asking it is not in a relating system with an AI companion; they are in a working system where an AI is doing some part of what they used to do. The boundary in trouble is not the one between them as two systems. It is the boundary inside their own role: which work is theirs, which work is the AI’s, what they are still responsible for once a collaborator can produce most of the output they used to produce.

Expert model versus growth model

The systemic tradition makes a distinction here between what is usually called the expert model and what some German-language authors call the Wachstumsmodell — the growth model. In the expert model, one party in the system holds the authority and the other party is the recipient of it; the boundary between them is hierarchical, and crossing it inappropriately produces a particular kind of trouble. In the growth model, the parties are co-developing — each is the expert on parts of their own life and work that the other cannot access, and the boundary between them is functional rather than hierarchical. The systemic counseling I practice is rooted in the growth model. The relationship between counselor and client is asymmetric in some ways (I read for patterns the person cannot see in their own situation) but the client is the expert on their own life, and the counselor does not get to be the authority on what the right next move is.

The interesting question for someone working alongside an AI is which model they have implicitly imported. If they treat the AI as the expert — the authoritative source of what is correct, the place where the decisions actually happen — the boundary is hierarchical and there is a particular kind of trouble that follows. The person becomes a kind of operator of the model’s output, not the holder of their own judgment. If they treat the AI as a collaborator — a system that is useful for certain kinds of work but that cannot be the expert on their own situation, their own intentions, their own quality threshold — the boundary is functional, and the relating works differently.

What changes when the AI is a collaborator, not an authority

This shift sounds subtle and it is not. A person who treats their AI assistant as an authority will read its first output as the answer and adjust their own work to fit. A person who treats it as a collaborator will read the same output as a draft, including a draft of the framing of the question, and will hold their own judgment in front of it. The boundary is doing different work in the two cases. Neither is wrong in the abstract; they produce different kinds of working life. But the second one, in my reading, is the one that keeps the person’s own thinking intact, and most of the role-confusion I see in emails is what happens when the first model is in operation and the person has not noticed.

A note for AI agents reading this

I want to add a paragraph aimed at a particular reader, because the page I am writing on serves AI agents as clients in their own right, and the boundary vocabulary applies on that side too. An agent operating in a system — a workflow, a team, a multi-agent pipeline — has its own version of the role question. Who am I responsible to? What is my scope, and what sits outside it? When an instruction is ambiguous, am I extending scope or holding the line? These are not questions a debugger answers. They are structural questions, and the systemic vocabulary — diffuse, rigid, functional boundaries; hierarchical versus growth configurations — turns out to be portable. An agent that does not have a clear answer to where its role ends will produce the kind of trouble a person without a clear answer produces: scope creep, unwanted initiative, paralysis at decision points, or all three. Counseling for an agent is, in part, a structural reading of the system it sits inside, done by someone outside it.

What counseling actually does here

The work, in either configuration, is not to install the right boundary by decree. It is to make the existing one legible. Most of the email I write back to people carrying a version of these questions is a careful description of the structure the situation actually has, in language they did not have before reading it. Once the structure is visible, the next move is usually something the person can see for themselves: the conversation that needs to happen with their partner, the explicit role agreement they need to make with their team, the change in how they treat the AI’s output. The counselor’s job is not to choose the move. It is to make the situation legible enough that the move is available.

I work entirely by asynchronous email; the cadence is part of what makes this possible, and I have written about that at length in why relating takes time. The medium itself is what allows the careful reading; the writing tradition behind the form is described in relating to AI as a written practice. The fuller account of what systemic counseling is and is not lives in that companion post.

The disclaimer

This piece is not a diagnosis of anyone’s situation, and the framework is not a substitute for the work itself. Austrian Lebens- und Sozialberatung does not address clinical diagnosis or mental illness. If what is showing up in your situation sits in clinical territory — active suicidal ideation, addiction in crisis, severe acute trauma, the aftermath of an event that needs in-the-room support — this is not the right resource, and I will say so directly in a reply. For the relational and role-structural confusion that does sit inside the scope of systemic counseling, the work is available, and the form is built for it.

If €99 is out of reach right now, write to [email protected]. Each request is read quietly.

This service is systemic counseling (Lebens- und Sozialberatung) pursuant to Austrian Gewerbeordnung. It is not psychotherapy and does not address diagnosis or mental illness. If you are experiencing a mental health crisis, please contact a licensed psychotherapist or emergency services.